Bryan Friedman

The Evolving Technologist: Adventures of a Recovering Software Generalist

Career Refactoring

It was 11 years ago (to the day, if you can believe that) that I started a new job after leaving my first job out of college. (Fun fact: that was also an 11-year run.) Since then, I’ve been charting my career journey in this space. It sure has been quite a ride, filled with diverse roles, inspiring leaders, and wildly different company cultures.

Most of my time has been spent inside large enterprise companies with tens of thousands of employees. In my most recent roles, I’ve even been focused on selling software and application platforms to enterprise customers, giving me a unique view from both sides of the table.

That said, I’ve also had a couple of stints in startup and startup-like environments with as few as 100 people. The contrast between those experiences and the enterprise world is stark. It reminds me of some movie quotes...

“There’s a difference between knowing the path and walking the path.” Enterprises tend to be structured. In fact, with rigid processes, strictly defined roles, and lots of layers, I’d say they are often too structured. By contrast, in startups you’ll find more fluid responsibilities with a frequent need to adapt on the fly.

“All we have to decide is what to do with the time that is given to us.” Startups move fast. Decisions happen quickly. Iterations happen faster. In a big company, on the other hand, getting anything done usually means wading through a frustrating swamp of cross-functional alignment meetings, approvals, and never-ending loops.

“The study of pressure and time.” Sure, enterprises come with an abundance of resources, but agility usually isn’t one of them. Startups might be resource-constrained, but this has a way of forcing creative thinking, building resilience, and leading to more innovative outcomes.

“Old and busted, new hotness.” Startups can build with the latest tools, trends, and tech from the ground up. Enterprises, meanwhile, are often tied to legacy systems and are forced to drag a lot of baggage along for the ride. It’s much harder to steer the ship into new waters.

While I have not spent the majority of my career in startups, I’ve loved the time that I have. I still vividly remember my first exposure to startup speed. A bug was discovered, and a fix was coded, tested, and pushed to production all within an hour. My mind was blown. That one moment taught me more than months in the enterprise, and I got to tap into skills I didn’t even know I had.

Eventually, though, I got sucked back into the enterprise machine, and I didn’t fully realize how much it had started to wear on me. The longer I stayed, the more my disillusionment grew, chipping away at my energy and motivation until it ultimately broke me down.

Now, at last, I am building myself back up. I’m thrilled to say, I’m heading back into startup-land. Today, I’m joining Moderne as a Technical Marketing Lead. It checks so many boxes for me.

True Modernization. Moderne is tackling a challenge close to my heart: improving code quality and reducing technical debt at scale through automated refactoring. As a former product manager, I still have scars from punting on feature work so the team could upgrade dependencies, migrate to TypeScript, or swap logging libraries. The opportunity to improve developer productivity and enable tech stack liquidity, particularly for enterprise companies with massive code bases, is incredibly exciting.

Closer to Code. After years in infrastructure and application platforms, it feels good to get closer to where software actually gets written. I may be in a marketing role, but I’ll still get to frequently nerd out about parsers, visitor patterns, and Lossless Semantic Trees (LSTs) thanks to the magic of OpenRewrite, the open source project powering Moderne’s platform.

AI That Matters. The AI boom has been overwhelming, but Moderne isn’t just bolting on AI for buzz. They’re thoughtfully weaving it into the platform, using a hybrid approach that combines their rules-based system of deterministic recipes with everything LLMs and machine learning bring to the table.

Broad Skill Application. I’ve always gravitated toward roles that blend technical expertise and depth with strength in soft skills like communication, collaboration, storytelling, and problem-solving. Moderne’s small and nimble team gives me the chance to wear multiple hats and contribute wherever I’m needed most.

People I Respect. I’m lucky to be joining a team full of folks I’ve admired for a while. It’s energizing to be surrounded by smart, driven people. Plus, there’s a strong Java foundation here that keeps me connected to my friends in the Spring community.

Remote First. The Moderne team is globally distributed, and I’ve been working remotely since before it was cool. While I certainly appreciate in-person meet-ups on occasion, async communication suits me just fine. I’ve been able to build trust through consistent delivery rather than relying on physical presence, and with today’s collaboration tools, it’s easy for remote teams to stay connected and effective.

As I step into this next chapter, I’m excited to help reshape how developers write and maintain software by making refactoring easier, faster, and smarter. Let’s go!


Automated Refactoring Meets Edge Deployment: An Exploration of OpenRewrite and EVE-OS

I know from my experience working for and with enterprise companies that keeping dozens or hundreds (or thousands!) of apps up to date is complicated. Much of my career in tech has been spent in and around the cloud-based platform and modern application development spaces in an attempt to help solve this problem for customers. But I also spent time as a product manager working directly with developers, so I’ve seen how even with automated CI/CD pipelines, modern app architectures, and robust app platforms, it ultimately comes down to effectively managing a code base and often tackling mountains of tech debt along the way. I remember having to spend precious sprint cycles on cleaning up and refactoring whole swaths of code instead of focusing on delivering features for end users.

I’ve also seen over the years how even the most successful moves to the cloud can still lead to a lot of challenges when it comes to data migration. Plus, with the explosion of Internet-of-Things (IoT) devices, it’s getting more and more difficult to ship data off to the cloud for processing. It’s been fun to watch the trend towards edge computing to combat these obstacles, but of course, that brings its own set of challenges from a scaled management perspective. I remember working on this almost ten years ago with automated bare metal hardware deployments, but now there is even more to consider!

These are hardly solved problems, but thankfully, a few of my former colleagues have ended up at companies where they are addressing them with some very innovative solutions. In my career, I’ve been extremely lucky to meet and work with some truly smart people, and one of the perks of knowing so many sharp folks in tech is that just by following their career paths, I can keep up to date with a lot of industry trends and get exposed to technologies that are new to me. This is how I became aware of two open-source projects that I’ve recently been exploring...

OpenRewrite

OpenRewrite is an open-source tool and framework for automated code refactoring that’s designed to help developers modernize, standardize, and secure their codebases. With all the tech debt out there among enterprise teams managing large Java projects in particular, OpenRewrite was born to work with Java, integrating seamlessly with build tools like Gradle and Maven. But it’s now being expanded to support other languages as well.

Using built-in, community, or custom recipes, OpenRewrite makes it easy to apply any changes across an entire codebase. This includes migrating or upgrading frameworks, applying security fixes, and imposing standards of style and consistency. The OpenRewrite project is maintained by Moderne, who also offers a commercial platform version that enables automated refactoring more efficiently and at scale.

EVE (Edge Virtualization Engine)

EVE is a secure, open-source, immutable, lightweight, Linux-based operating system designed for edge deployments. It’s purpose-built to run on distributed edge compute hardware and works with a centralized controller that provides orchestration services and a standard API for managing a fleet of nodes. Think about having to manage hundreds (or more!) of small-form-factor devices like Raspberry Pis or NUCs that are running in all sorts of places across different sites.

With EVE-OS, devices can be pre-configured and shipped to remote locations to limit the need for on-site IT support. And its Zero Trust security model protects against bad actors who might gain access to these edge nodes, which often live outside the protection of a formal data center. Because it is hardware agnostic and supports VMs, containers, Kubernetes clusters, and virtual network functions, it can run applications in a variety of formats. EVE-OS is developed by ZEDEDA specifically for edge computing environments and aims to solve some of these unique challenges around running services and applications on the edge. They also offer a commercial solution for more scalable orchestration, monitoring, and security.

Let’s Build Something!

There isn’t exactly an obvious intersection of interest here, but bumping into these projects independently, right around the same time, got me thinking about how I could experiment with both of them and build something that balances practical OpenRewrite usage with something deployable via EVE-OS. This is what I came up with:

  1. Write a very simple but somehow outdated Spring Boot REST app
  2. Use OpenRewrite to refactor and “modernize” it
  3. Containerize the resulting modern app
  4. Deploy it to an EVE-OS “edge node” [locally]

Of course, this only scratches the surface of the potential that these technologies have, but it turned out to be a pretty fun exercise for getting started by just dipping my toe a bit into each of these areas. In case you’re interested in getting your feet wet too, I’ve summarized the steps I took below, including a link to the code I used.

Refactoring a Simple Legacy Spring Application

As a developer, my Java knowledge is admittedly relatively surface level, but I do know enough to write a working REST controller. Here’s my simple class that just calls a basic endpoint and spits back out its JSON result:

package com.example;

import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;
import org.springframework.http.MediaType;

@RestController
public class HelloController {

    @RequestMapping(value = "/", method = RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
    public String hello() {
        System.out.println("Calling external service...");
        RestTemplate client = new RestTemplate();
        String response = client.getForObject("https://httpbin.org/get", String.class);
        return response;
    }
}

My Spring skills are pretty outdated, so I would say a refactor is most certainly in order. Accordingly, I figured I’d use OpenRewrite to accomplish three primary things when updating this code:

  • Use the newer dedicated @GetMapping as an alternative to @RequestMapping
  • Use the SLF4J Logger instead of the elementary System.out.println
  • Upgrade from Spring Boot 2.x to 3.x
    • I didn’t show my pom.xml file here, but I used version 2.3 and will upgrade to 3.2

There are definitely other things I could choose to update. For example, I didn’t opt to write test cases in a test class, but if I had, I could also have migrated from JUnit 4 to 5. I also saw some articles that suggested updating RestTemplate to RestClient or even the asynchronous WebClient. I didn’t find any recipes for this (perhaps I could tackle writing a custom one), but I left that out of scope for now. I’m satisfied with this limited example.
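
Just for illustration, here’s roughly what a hand-written RestClient version of the controller could look like once the app is on Spring Boot 3.2 (RestClient arrived in Spring Framework 6.1, which Boot 3.2 pulls in). This is only a sketch of the target state I had in mind, not the output of any recipe:

package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClient;

@RestController
public class HelloController {
    private static final Logger logger = LoggerFactory.getLogger(HelloController.class);

    // RestClient replaces RestTemplate's template-method style with a fluent API
    private final RestClient client = RestClient.create();

    @GetMapping(value = "/", produces = MediaType.APPLICATION_JSON_VALUE)
    public String hello() {
        logger.info("Calling external service...");
        return client.get()
                .uri("https://httpbin.org/get")
                .retrieve()
                .body(String.class);
    }
}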

Since I first learned to build Spring apps with Maven, that’s what I opted to use here (but there is support for Gradle as well). The basic Maven plugin command to run for OpenRewrite is mvn rewrite:run, but that requires defining configuration and parameters in pom.xml. I wanted to keep everything dynamic and on the command line, so I passed everything in using the -D flag to define the properties:

$ mvn -U org.openrewrite.maven:rewrite-maven-plugin:run \
      -Drewrite.exportDatatables=true \
      -Drewrite.recipeArtifactCoordinates=org.openrewrite.recipe:rewrite-spring:RELEASE \
      -Drewrite.activeRecipes=org.openrewrite.java.spring.boot3.UpgradeSpringBoot_3_2,org.openrewrite.java.spring.NoRequestMappingAnnotation,com.example.ReplaceSystemOutWithLogger

You can see the three active recipes that I passed in to perform the tasks I outlined above. The first two are recipes straight from the OpenRewrite catalog. The last one is too, sort of, but in order to pass it the necessary configuration options, I created a rewrite.yml file in the root of the project:

type: specs.openrewrite.org/v1beta/recipe
name: com.example.ReplaceSystemOutWithLogger
recipeList:
  - org.openrewrite.java.logging.SystemOutToLogging:
      addLogger: "True"
      loggingFramework: SLF4J
      level: info

This specifies what logging framework and log level to use. The active recipe references whatever name is used here, hence com.example.ReplaceSystemOutWithLogger.
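
Incidentally, when no declarative recipe fits (like that RestTemplate-to-RestClient idea above), OpenRewrite recipes can also be written imperatively in Java. I didn’t end up needing one here, but a minimal skeleton, following the structure from the OpenRewrite docs, looks something like this (the class name and visitor body are just illustrative placeholders):

package com.example;

import org.openrewrite.ExecutionContext;
import org.openrewrite.Recipe;
import org.openrewrite.TreeVisitor;
import org.openrewrite.java.JavaIsoVisitor;
import org.openrewrite.java.tree.J;

public class MyCustomRecipe extends Recipe {

    @Override
    public String getDisplayName() {
        return "My custom recipe";
    }

    @Override
    public String getDescription() {
        return "Demonstrates the shape of an imperative OpenRewrite recipe.";
    }

    @Override
    public TreeVisitor<?, ExecutionContext> getVisitor() {
        // Visitors walk the Lossless Semantic Tree and return (possibly modified) elements
        return new JavaIsoVisitor<ExecutionContext>() {
            @Override
            public J.MethodInvocation visitMethodInvocation(J.MethodInvocation method, ExecutionContext ctx) {
                // Inspect or transform matching method calls here
                // (e.g., find RestTemplate usages to migrate)
                return super.visitMethodInvocation(method, ctx);
            }
        };
    }
}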

And that’s it. Running the mvn command above does the magic, fixing the pom.xml file to reference Spring Boot 3.2 and updating the controller code as follows:

package com.example;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;

@RestController
public class HelloController {
    private static final Logger logger = LoggerFactory.getLogger(HelloController.class);

    @GetMapping(value = "/", produces=MediaType.APPLICATION_JSON_VALUE)
    public String hello() {
        logger.info("Calling external service...");
        RestTemplate client = new RestTemplate();
        String response = client.getForObject("https://httpbin.org/get", String.class);
        return response;
    }
}

Notice @GetMapping has replaced @RequestMapping and the System.out.println has been swapped out for a logger call. The code still builds and runs fine, but now it’s up-to-date!

Here’s the repository with the full set of code: https://github.com/bryanfriedman/legacy-spring-app. It has the original code in main and the updated code on the refactor branch so you can use git diff main..refactor or your favorite diff tool to compare.

Deploying the Refactored App to an EVE “Edge Node”

Now that we have a running, refactored app, let’s deploy it to “the edge.” But first, we need an EVE node. The easiest way to set up a virtual EVE node locally, it turns out, is to use a tool called Eden (clever) as a management harness for setting up and testing EVE. Eden will also help us stand up Adam (also clever), an open-source reference implementation of an LF-Edge API-compliant controller, which we will need to control the EVE node via its API. Eden is neat because it lets you deploy/delete/manage nodes running EVE, the Adam controller, and all the required virtual network orchestration between nodes. It also lets you execute tasks on the nodes via the controller.

To accomplish this setup, I mostly followed an extremely helpful EVE tutorial that I found. It outlines the process of building and running Eden and establishing the EVE node and Adam controller. However, the tutorial was written for Linux, so I ran into a few snags in my macOS environment. As such, I ended up forking eden and tweaking a few minor things just to get it to work on my machine. This mostly involved getting the right qemu commands to make the environment run. You can see the specifics here in the forked repo. And of course, while the tutorial describes how to run a default nginx deployment to test things out, I deployed this Spring app instead. I also discovered that I needed to specifically configure port forwarding for the deployed pod in question in order to reach the app for testing.

Here are the slightly modified steps that I took:

Prerequisites

I installed the following prerequisites where they weren’t already present, using brew where possible and downloading otherwise: make, qemu, go, docker, jq, git.

Prepare and Onboard EVE

  1. Start required qemu containers:
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  2. Build Eden (I used my fork, as indicated above):
$ git clone https://github.com/bryanfriedman/eden.git && cd eden/
$ make clean
$ make build-tests
  3. Set up Eden configuration and prepare port 8080 for our app:
$ ./eden config add default
$ ./eden config set default --key eve.hostfwd --value '{"8080":"8080"}'
$ ./eden setup
  4. Activate Eden:
$ tcsh
$ source ~/.eden/activate.csh
  5. Check status, then onboard EVE:
$ ./eden status
$ ./eden eve onboard
$ ./eden status

Deploy the app to EVE

  1. Deploy the Spring app from Docker Hub (this is the containerized app from step 3 of the plan, pushed to my Docker Hub account):
$ ./eden pod deploy --name=eve_spring docker://bryanfriedman/legacy-spring-app -p 8080:80
  2. Wait for the pod to come up:
$ watch ./eden pod ps
  3. Make sure it works:
$ curl http://localhost:8080

Conclusion

After all this work, I’m not exactly an expert in automated refactoring or edge computing all of a sudden, but I do have a much better understanding of the technologies behind these concepts. While they might not seem particularly related, I can definitely see how a company might be interested in both of these paradigms as it looks to modernize its apps at scale and potentially migrate them to run at the edge. Even with these rudimentary examples, you can start to see the power these technologies can provide.


Everything is Product

I can’t believe it’s been more than ten years since I first became a product manager. Since then, I’ve been on a “hybrid career” journey, working in a lot of different related areas over the past several years, including product marketing, competitive intelligence, developer relations, customer education, and technical content development, all while playing both individual contributor and leadership roles.

For me, though, it’s been sort of a “once a product manager, always a product manager” situation through all of it. Actually, I have brought the things I learned from product management into just about everything I’ve done, even outside of my career. Thinking about things from a user perspective, creating a closed feedback loop, working iteratively, failing fast, and staying agile (and sometimes Agile), are all strategies that can be applied successfully in quite a few different areas. Why is that?

Everything is product. (That’s why.) Except many companies (unfortunately including some I have worked for) only consider products to be those tangible items or software applications that directly generate revenue. If you have ever been stuck in that situation, you understand my frustration. Thankfully, I’ve seen the other side of the story as well. The best companies and leadership teams have a broader perspective, recognizing that everything with users should be considered a product, regardless of its primary function or revenue-generating potential.

Redefining Product

A product, at its core, is something that provides value to its users. By this definition, many aspects of a business that might not traditionally be viewed as products actually fit the bill.

Internal tools and processes

Your employees are customers too! Just because a particular application is only used within the context of doing business doesn’t mean it shouldn’t consider the needs of those users. A company’s culture should reflect its attitude toward customers. If it doesn’t care about its own internal users, why should customers trust it to care about them?

Customer support systems

Anything your customers interact with is a product, even those adjacent tools that might not be part of the primary product you are selling. This includes ticketing applications, automated phone trees, email notifications, and any other experiences that support the customer along the way. These are all part of the overall customer experience, so you may not be thinking of them as products, but the users certainly are.

Company websites and documentation

A website is a product! This is especially true if it’s a user portal or a documentation or reference site. These sites are just as much an extension of your brand as your core products. If the customer’s interaction with your branded content is subpar, they will view all of your products that way.

Driving Value with a User-Centric Approach

In case it isn’t clear already, the reason for viewing everything as a product is that it allows the adoption of a more user-centric approach. This shift in mindset can lead to significant improvements across all areas of a business:

  • Enhanced user experience: By treating internal tools and supporting systems as products, we focus on making them more intuitive and efficient for users.
  • Better customer satisfaction: When every touchpoint is treated as a product, the overall customer journey improves.
  • Continuous improvement: Perhaps most importantly, the product mindset encourages regular updates and iterations based on user feedback.

I can hear those revenue-obsessed executives now. “Why does this matter?” I understand the thinking. There’s no company without revenue. But if a company is only focused on this metric, they are missing the hidden value of non-revenue products that are created in less obvious ways:

  • Customer retention: Well-designed products improve satisfaction and reduce churn. (Internally, this also means employee retention and reduced turnover.)
  • Brand perception: User-friendly websites and documentation enhance brand image and indirectly drive sales.
  • Operational efficiency: Internal tools treated as products can significantly reduce costs and improve productivity.

Adopting the "Everything is Product" Mindset

Changing your mindset to think about everything as product doesn’t mean you have to be completely capital-A Agile and go full-fledged SAFe or Scrum when building your company website. You can if it fits the situation, but all you really need to do is think (and plan) like a product manager.

To implement this approach effectively, you have to start by identifying and understanding your users, like any good product manager does. Once you know who they are, you can gather feedback from them regularly and analyze it accordingly. This is where product analytics tools can come in handy for quantitative metrics, and many of these tools now even have ways to gather qualitative input as well through surveys or ratings features.

With this feedback comes (hopefully) a closed loop of iteration and improvement. Use the feedback to continuously enhance and update the product. (If we keep calling it a product, maybe it will eventually become one.) Bonus points for measuring success with KPIs to track the performance and value. Then you can show your bosses how valuable this non-revenue generating thing actually is.

Conclusion

Truthfully, I’m kind of annoyed that I even felt I had to write this. This concept of everything being a product has become so apparent to me, so blatantly obvious, that I found myself in disbelief the last time I was told that it wasn’t the case in a particular situation. Watching a once great and valuable product that was loved by customers be gutted and replaced by cumbersome templated muck was painful. But it did help clarify what kinds of leaders and companies I’d be happy working for.

I want an environment where leadership embraces the "Everything is Product" philosophy. That way the business can unlock hidden potential, improve user satisfaction across the board, and create a culture of continuous improvement.

If it has users, it’s a product, and it deserves the same level of attention, design thinking, and iterative improvement as any flagship offering.


Learning the Importance of Learning

My daughter's elementary school has this thing called the Growth Mindset Program. I didn't really pay much attention to what that meant when I first heard it. But as she progressed through a year or two of school, it came up more and more. So I figured it was probably time to figure out what it means.

I asked my wife, the teacher. She told me it can be explained simply as "the power of yet." When children struggle with something, instead of thinking "I can't do it" we help them frame things differently so they say "I can't do it...yet." See how that works? Now they know it's just a process and they'll get there eventually.

That was enough for me to feel like I understood it. It makes sense for developing young minds to think that way. I felt lucky that my kid was in a school that embraced such higher-order thinking.

Then it came up again. My wife and I were helping our daughter navigate some challenges she was facing due to some perfectionist tendencies, and we came across a book called Bubble Gum Brain. It's a tale of two kids with different brains. Bubble Gum Brain likes to stretch his mind and learn new things without worrying about mistakes, but Brick Brain figures there's no way to change things so it's not worth trying.

Riveting fiction. But it actually helped. And in case it's not obvious, it's about Growth Mindset. (I also discovered a good book called Giraffes Can't Dance that has a similar message in a slightly more subtle fashion. It could more accurately be titled Giraffes Can't Dance...Yet.)

So that was that. Now my daughter was better able to manage her bouts of perfectionism by thinking about bubble gum and giraffes. Parenting achievement unlocked.

Then it came up again. Except this time, it wasn't the eight-year-old. It was Twitter. And it actually came up a lot. My Twitter feed is primarily filled with cloud computing and tech pundits and professionals (with a smattering of comedians and baseball reporters just to confuse and entertain). So I was surprised to see an elementary school education concept come up with some regularity from this crowd.

I'm sure you're way ahead of me here. My brick brain had taken this long to realize that this wasn't just for kids. In fact, maybe there was something to the fact that my own kid had been struggling with perfectionism. Have they found that strand in the human genome yet?

Yes, Growth Mindset is a thing. Once I began down the internet rabbit hole, I realized just how much of a thing it is. "The power of yet" isn't just something my wife made up to explain it to me. There are gobs of research, books, articles, and videos about it. And it's something that requires real cultivation. If getting everything perfect on the very first try is something frequently lauded, it's not a great environment for growth.

Thankfully, I've been lucky enough for the past several years to work in organizations and for managers that heavily value learning and actually do embrace a Growth Mindset. I just never put a name to it. (I've been in the opposite situation too so it's nice to have some perspective on it.)

So despite me burying my head in the sand about it for so long, I've been attempting to tap into my bubble gum brain as much as possible. I'm working on being less affected by a fear of failure and trying hard to celebrate my mistakes as part of the learning process. I guess what they say is true. You can learn from your kids.

Why am I writing about this now? Because at the start of this new year I've taken on a new role at VMware, leading a small team and focusing on developer engagement to help enterprise developers learn about and get started using VMware Tanzu. I've been a developer before, but this particular experience is a new one for me. And some days I feel like I don't know how to do it...

...yet.


SpringOne Platform 2018 - Let’s Get Technical

SpringOne Platform is known for showcasing some of the most compelling customer stories you’ll find at any tech conference. Last year, we heard from many leading companies about how they are getting better at software. This year, there were more amazing tales of transformation from enterprise leaders. It’s safe to say that “it’s still about outcomes.”

But behind all these great outcomes is a lot of cool tech! I attended quite a few technology-focused sessions this year. They got me excited about the various announcements throughout the week. There was all the stuff you’d expect at a conference called “SpringOne Platform” — new versions of Spring components, Java 11 talk, and platform releases for PCF and PKS. Then there were so many other tech topics that showed up too. I found these five to be the most intriguing:

1. Continuous Everything (CI/CD)

We’ve heard the virtues of continuous integration and continuous delivery at SpringOne before. There’s plenty to be found on the power of Concourse as a CI tool. Even PCF operators are big on Concourse and its ability to provision and repave the platform. This year, there was talk of inviting a new friend to the party.

What Happened?

Pivotal announced that it “has a team working on contributing Cloud Foundry support to open source Spinnaker.” Spinnaker already supported AWS, Azure, GCP, Google App Engine, Kubernetes, and other platforms. Now, Pivotal is ensuring that Cloud Foundry is a first-class citizen of Spinnaker. Jon Schneider covered this on the main stage and in detail in a breakout session.

Why Is It Cool?

Spinnaker is one of the few true multi-cloud delivery platforms. Started by Netflix, it has contributions from Google, Amazon, Microsoft, and now Pivotal. There are two essential components: a multi-cloud application inventory and pipelines.

The inventory piece is critical, since applications rarely live on a single platform. Spinnaker presents an aggregate view of all your applications, clusters, and instances. (It can do this without even having deployed them.) This allows users to determine their application health and state across platforms. It also means Spinnaker is distinctly able to run out-of-band processes. As a result, it supports running things like vulnerability scanning or chaos engineering tooling at build time.

Along with the inventory, as you’d expect from a CD solution, Spinnaker offers pipelines. Even if you are a user of Concourse, Jenkins, or other CI tools, Spinnaker is best suited to help with these delivery aspects of your pipeline.

How Can I Get Started?

Check out Spinnaker on GitHub and at https://www.spinnaker.io/. Keep an eye out for the 1.10 release which will include an early version of Cloud Foundry support.

2. Secure Credentials

Presentations about security topics don’t always offer the most gripping demos. Still, I was very interested in a few of the breakout sessions on CredHub, the credential manager that’s baked right into Cloud Foundry. It turns out, security can be seductive.

What Happened?

In his three sessions, Peter Blum offered a few different looks at CredHub. There was a great overview of how it works with PCF and Spring. His most fascinating session, though, brought the magic of CredHub together with Kubernetes.

In his example, a webhook object in Kubernetes injects CredHub into pods on the cluster. Then application code in the pods may access secrets from the credential store. It was a slick demo and an incredible way to show off CredHub’s simplicity and strong capabilities. Peter’s CredHub with Kubernetes code is on GitHub!

Why Is It Cool?

CredHub offers a secure way for humans and applications to interact with secrets. With Pivotal Application Service and the CredHub Service Broker, developers never have to know or see any passwords. Passwords are only available to application containers with authorized access. Each application container includes a signed certificate and key. This key provides identity when communicating with CredHub.

How Can I Get Started?

There are tons of amazing CredHub resources out there, including some recent blog posts from my colleagues.

You can also go straight to the CredHub Docs or the GitHub repo for more detailed info. For you Spring buffs, there’s even a Spring CredHub project.

3. Serverless

What’s a tech conference today without the mention of serverless? SpringOne definitely had its share of serverless moments. Of course, Pivotal Function Service (coming soon) got a shout-out from Onsi Fakhouri. Plus, there were plenty of other details covered about Knative and riff at the conference.

What Happened?

Mark Fisher did a live demo of riff on the main stage. There were also some very informative and demystifying sessions on Knative and riff. They ranged from YAML-heavy to YAML-free with one especially for Spring developers.

Why Is It Cool?

At SpringOne Platform last year, Pivotal announced riff, an open source serverless framework. Earlier this year, Pivotal revealed that riff was replatformed on top of Knative. This is the technology that is driving Pivotal's serverless future. Knative and riff will power the yet-to-be-released Pivotal Function Service.

How Can I Get Started?

Check out https://projectriff.io and https://pivotal.io/knative for more details. You can also find both riff and Knative on GitHub.

4. Buildpacks Everywhere

Containers and Kubernetes are hot topics at conferences. What's even hotter? Taking control of the application lifecycle in a container-centric world. Developers want a fast and secure way to get from source to container. It’s something the Cloud Foundry community has had solved for a while with buildpacks. Now this solution is expanding.

What Happened?

Day 1 main stage had a surprise ending from Stephen Levine from Pivotal and Terence Lee from Heroku. They introduced an effort to bring buildpacks to the broader cloud-native community. It's called Cloud Native Buildpacks, and it joins the CNCF as a sandbox project today.

Why Is It Cool?

Buildpacks are an “opinionated, source-centric way to build applications.” They are a big part of the magic behind Cloud Foundry’s `cf push` experience. Buildpacks detect the kind of app, then fetch and install the tools needed to run it. For operators, the ability to manage a curated set of buildpacks is attractive. It also allows for rapid, secure patching en masse using remote image layer rebasing. All the while, developers simply focus on delivering value for their own customers. The new specification and set of tools enable buildpacks to be used on any platform.

How Can I Get Started?

Check out https://buildpacks.io/ for more info. Meanwhile, use the `pack` CLI to experiment with Cloud Native Buildpacks.

5. Reactive Programming

Reactive programming is not a new concept for SpringOne Platform attendees. I vividly remember Phil Webb’s awesome keynote from last year comparing blocking with non-blocking. (Who can forget the swimming ducks and cats?) This year there was more Reactive-related fun.

What Happened?

There were two impressive keynotes relevant in the Reactive programming space. First, there was the introduction of the non-blocking relational database connectivity driver, R2DBC. We also learned about RSocket, a new message-based application network protocol.

Why Is It Cool?

In two articles on InfoQ, Charles Humble examines both R2DBC and RSocket. He does an amazing job explaining the advantages of Reactive programming. As Pivotal's Ben Hale explains in one article, "Reactive programming is the next frontier in Java for high efficiency applications." He points out two major roadblocks to Reactive programming: data access and networking. R2DBC and RSocket aim to address these problems.

I found RSocket to be particularly fascinating. In the main stage presentation, Stephane Maldini gave a brief but helpful history of TCP and HTTP. He framed RSocket as an alternative to these protocols while sort of bringing the best of each to bear. Rather than simply request/response, RSocket offers four different interaction models. (They are Request/Void, Request/Response, Request/Stream, and Stream/Stream.) What's more, it's language-agnostic, bi-directional, multiplexed, message-based, and supports connection resumption. It kind of blew my mind.
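
To make those models a little more concrete, here's a conceptual sketch of their shapes in Java. It mirrors the structure of the io.rsocket.RSocket interface, but uses String payloads for readability (the real interface deals in Payload objects), so treat these names as illustrative:

import org.reactivestreams.Publisher;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Conceptual mapping of RSocket's four interaction models (labels from the talk)
public interface InteractionModels {
    Mono<Void> fireAndForget(String message);            // Request/Void: send and forget
    Mono<String> requestResponse(String message);        // Request/Response: one in, one out
    Flux<String> requestStream(String message);          // Request/Stream: one in, many out
    Flux<String> requestChannel(Publisher<String> in);   // Stream/Stream: many in, many out
}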

How Can I Get Started?

As always there’s a .io site for RSocket (http://rsocket.io/) and an RSocket GitHub repo. R2DBC is on GitHub too. It’s also worth checking out the related content from the conference. Ben Hale covered both R2DBC and RSocket in his sessions.

Next Year in Austin!