Bryan Friedman: Clouding Up

My Journey from Enterprise IT to the Cloud

Author: Bryan Friedman

My Interoperable Opinions of Cloud Foundry Summit 2018

Last week I visited Boston for the first time and attended my very first Cloud Foundry Summit. I also took the opportunity while I was there to make my first visit to Fenway Park. It was a fabulous week of firsts for me.

As with any conference, one measure of excellence is the amount of quality examples of customer success stories. It’s also nice to see compelling demos of new and interesting technology. CF Summit 2018 did not disappoint in either of these departments. In fact, my colleagues have already written quite eloquently on these topics. So I’ll spend some time on something else that was a key theme of the conference.

Interoperability FTW

Interoperability was an explicit thread through many of the keynotes and breakout sessions. Cloud Foundry Foundation CTO Chip Childers even hinted at this trend back in January.

To be sure, Cloud Foundry tech has always championed interoperability. It’s multi-cloud. It’s polyglot. It’s OCI-compliant. The Open Service Broker API was even born of Cloud Foundry. (It’s now been adopted by the Kubernetes community.) It was fantastic to see these concepts expand even more this year.

There was the introduction of Alibaba Cloud as a BOSH CPI. Some awesome advances in .NET support appeared (plus a whole conference track to go along with it). Kubernetes was also mentioned quite a bit as the Cloud Foundry Container Runtime continues to take hold.

Indeed, it’s nice to see this interoperability movement flourish. Still, I couldn’t help but think of how it relates to another critical part of Cloud Foundry’s success.

Opinions Are Like… Everybody’s Got One

Yes, it embraces interoperability. Yet Cloud Foundry has always been billed as an opinionated platform. So it’s important to point out that “interoperable” and “opinionated” are not mutually exclusive. In fact, they are equally important characteristics of an effective platform. Interoperability without opinions runs the risk of becoming complicated or difficult to use. And of course, opinions without interoperability may prove irrelevant. After all, a good platform has to be able to handle many types of workloads. It should integrate with the services and technologies that you need to use.

So both are important. But in my previous life working in IT, I’ll admit I wasn’t in the opinionated camp. I didn’t even understand it as a concept. I generally went for the software with the most flexibility. What I didn’t realize was how often this led to analysis paralysis and decreased productivity.

I remember one of the last projects I worked on. We were selecting a software product for financial planning and reporting. Ideally, we’d have found a solution that did 80% of what was required. We should have reevaluated the actual importance of the other 20% we thought we needed. Instead, we focused on that 20% until we settled on something that could handle it. Then implementation details, changing requirements, and complex technology got in the way anyway. As I recently heard one industry analyst say, “Choice is not a differentiator.”

Unfortunately, I had not yet learned about the value that opinionated software can bring. It’s about a simplified user experience and increased productivity. I like how Duncan Winn describes it in his book, Cloud Foundry: The Definitive Guide:

When you look at successful software, the greatest and most widely adopted technologies are incredibly opinionated. What this means is that they are built on, and adhere to, a set of well-defined principles employing best practices. They are proven to work in a practical way and reflect how things can and should be done when not constrained by the baggage of technical debt. Opinions produce contracts to ensure applications are constrained to do the right thing.

Platforms are opinionated because they make specific assumptions and optimizations to remove complexity and pain from the user. Opinionated platforms are designed to be consistent across environments, with every feature working as designed out of the box. For example, the Cloud Foundry platform provides the same user experience when deployed over different IaaS layers and the same developer experience regardless of the application language. Opinionated platforms such as Cloud Foundry can still be configurable and extended, but not to the extent that the nature of the platform changes…

That last part is key: “…can still be configurable and extended…” Remember, interoperability still matters. It just can’t happen at the expense of complexity. That’s why something like the Open Service Broker API is so elegant and powerful.

There’s an interesting nugget there at the beginning of Duncan’s description too: “…they are built on…well-defined principles…” It’s not only how the software works but also what it’s built on. The architecture is opinionated as well. A lot of times that means selecting a particular set of technologies or patterns and incorporating them together in a specific way. Basically: curation.

An Ounce of Productivity is Worth a Pound of Curation

Okay, so this play on a Benjamin Franklin quote isn’t exactly a perfect analogy. But the point stands, as I recently heard one customer put it: “Curation is how we get stuff done!”

In the consumer world, we enjoy the benefits of curation daily. We trust companies like Netflix to suggest movies and television we will like. We look to Amazon to tell us what we like to buy. Our Facebook and Twitter feeds are filtered for us. These are the modern giants of content curation. They use algorithms and AI to keep things relevant, but people still drive the behavior. Plus, think about traditional television or radio news, or even used bookstore or boutique owners. We embrace curation in our daily lives.

In the business and IT world, however, it seems like curation is often avoided. Remember the 20%? Sometimes the customer knows better and doesn’t buy into an opinionated architecture. They insist on defining it themselves. It’s true that curation may not be for everyone. Under the right circumstances, though, it can help save a lot of time and headaches. Determine where you are on the curation scale and pick the right solution. If you trust the curator, they can help.

At CF Summit, I attended many talks about Kubernetes and its role within the Cloud Foundry ecosystem. As Onsi Fakhouri spoke about at SpringOne Platform late last year, it’s an and conversation, not or. It’s not about Kubernetes vs. Cloud Foundry, but rather how can they interoperate? Or, more specifically (and more opinionated), how should they interoperate?

This was a popular topic at CF Summit this year. Right now, Cloud Foundry has a few ways it interoperates with Kubernetes. Most prominently it’s a separate container runtime (as opposed to the application runtime). Some things fit better on the container runtime (like stateful workloads, ISV container images). Some are made for the application runtime (12-factor apps, microservices, etc.). The opinion right now is that it all depends on the use case.

Other examples and conversations about Kubernetes interoperability showed up at the conference too. There were products that include CF running on top of K8s and demos showing K8s running within CF. As a first-time attendee, it was amazing to see the open discussion and sharing of ideas. That’s the beauty of open source software and its community. It can evolve to incorporate (read: “curate”) other growing technologies and find the right (read: “opinionated”) way to put it all together. (For Cloud Foundry, it doesn’t just mean Kubernetes either. Look at how the code base has begun incorporating Envoy for another example.) It will all come together in the way that makes the most sense for the user experience. In the end, that’s all that should matter.

It’s All About the Outcomes

Technology is a great enabler. We can’t do technology for technology’s sake. Containers are cool. Machine Learning is fun. Yes, there are some amazing pieces of tech out there. Except it’s not about the tech itself, but rather what it enables for its users. It’s the user experience, the productivity gains, the value, that matters.

Ultimately, technology should be about doing things better, faster, more reliably. That’s the level that all software curation conversations should arrive at: customer outcomes. Whatever the future of Cloud Foundry and Kubernetes brings, we can’t forget the fundamental goal: build software better.

Comparing Public Cloud PaaS Offerings

For custom-built applications, using a Platform-as-a-Service (PaaS) solution is an excellent option. With a PaaS, developers simply focus on writing code and pushing an app. It removes the complexity of having to build and maintain any underlying infrastructure.

In this post, I’m going to try out some of the major PaaS offerings and compare and contrast the experiences. There are two different approaches[1] to PaaS adoption:

  • Use a PaaS offered by a public cloud provider. All the big cloud players have a host of services covering the entire software stack. This includes PaaS, and customers may choose to host applications there.
  • Use a third-party PaaS on top of an IaaS provider. The alternative is to use a PaaS that can run on many infrastructure providers. The most notable option here is the Cloud Foundry platform.

I’ll assess three public cloud provider offerings (AWS Elastic Beanstalk, Microsoft Azure App Service, and Google App Engine), and one third-party option (Pivotal Cloud Foundry).

FULL DISCLOSURE: I work for Pivotal. I’ve also worked in the IaaS product space for 3 years. I have more than 10 years of experience working in enterprise IT. I’d like to believe I can remain pragmatic and present a fair view of related technologies.

My goal here is not to determine which option is better. To be clear, I’m not going to pick a favorite at the end. I won’t examine the merits of portability or vendor lock-in. Nor am I interested in getting into a public cloud vs. private cloud debate. I’m also not evaluating price or performance.

For now, I’m looking only at the process of creating and deploying an application. I want to show what kind of options each service offers and get a picture of what the experience is like. (I’ll do a followup post to take a look at the Day 2 operations activities like managing and monitoring the apps.)

Writing the Code

First I needed an application to deploy. For this exercise, I built a very simple one. It’s a web service to keep track of movies and television shows that my family and I have watched or want to watch. I call it Friedflix Media Tracker.

I could have used a starter app or someone’s example code. It would have saved me time and headaches. Instead, because it’s been a while since I’ve written Java, I took the opportunity to learn something new. So I wrote a simple REST endpoint using Spring Boot. To get a more real world experience, I decided to use a persistent datastore as well. (I haven’t yet decided if I regret that decision or not.) Since all the public cloud providers offer a MySQL product, that’s what I opted to use for my backend.

To keep things simple, I used the Java Persistence API (JPA) and took advantage of the auto schema creation feature. (More info in the Spring Boot documentation. My code was also heavily influenced by the Entity-User example on the Spring “Accessing data with MySQL” Getting Started Guide.) Obviously, the create setting I used is not something that should be left on for production code. This doesn’t take care of actually creating the database, only the tables within the database. We’ll still have to create a database for the app to connect to.
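
For reference, the auto schema creation piece boils down to a single property. Here is a sketch of the relevant part of application.properties (illustrative rather than the exact file from my repo); the database connection details themselves come from SPRING_DATASOURCE_* environment variables, which each platform will set differently and which Spring Boot picks up through its relaxed binding:

# src/main/resources/application.properties (sketch)
# Auto-create tables from the JPA entities at startup -- handy for a demo,
# not a setting to leave on in production
spring.jpa.hibernate.ddl-auto=create
# No datasource entries needed here: Spring Boot resolves SPRING_DATASOURCE_URL,
# SPRING_DATASOURCE_USERNAME, and SPRING_DATASOURCE_PASSWORD from the environment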

Deploying the Application

For each PaaS, I’ll use the UI as well as the CLI where possible. I’ll configure the app and database, deploy the code, then finish with a quick manual test to make sure it worked.

AWS Elastic Beanstalk

With Elastic Beanstalk, I used the Build a web app wizard from the main AWS page to get started. This actually takes care of two steps at once. It creates both an environment containing the necessary AWS resources to host our code, and an application construct that may contain many environments. (If we were to create an app without the wizard, we’d create the application first, then the environment. We can choose to create either a web environment, or a worker node for running related processes.)

Back to the wizard. We enter the application name and set the platform to Java (not Tomcat, which expects a war file rather than a jar). We upload the jar file right here as well (ignoring the fact that it asks for war or zip only). We could set up a few more things we need in the Configure more options section, but we’ll wait and do that later. Click Create application and it spins things up. Once deployed, the app will be available at http://<ENV_NAME>.<ID>.<LOCATION>.elasticbeanstalk.com.

Don’t forget, we need our database too. Amazon’s RDS offering makes this pretty easy. There’s a handy link at the bottom of the Configuration screen in the EB Management Console for our application. We can quickly spin up a MySQL instance with it.

The nice thing when we do it this way is that it creates the necessary Security Group and firewall rule for us so that the app may reach the database. Unfortunately, we still have to log in using a MySQL client to actually create the database, as previously discussed. So we add one more rule to the Security Group to let us log in and create the database. (To log in, we can use any MySQL client we like. To connect, we just need to use the database hostname that’s listed as the Endpoint from the Data Tier area in the app Configuration screen.)
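
If you have never done it, that create step is only a couple of commands. Here is a rough sketch (the endpoint, user, and database name are placeholders for whatever you configured in RDS):

# Connect with any MySQL client using the Endpoint value from the EB console
mysql -h <RDS_ENDPOINT> -P 3306 -u <MASTER_USERNAME> -p

-- Then, at the mysql> prompt, create the database the app will connect to
CREATE DATABASE friedflix;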

The last thing we have to do is set our environment variables. With EB and RDS, there are environment variables built-in that we could have used (like RDS_DB_NAME, etc.). Instead, we need to set the Spring-specific ones. We do that by clicking the Software Configuration gear and scrolling down to the Environment Properties section. Set the database connection info and also the port, since Elastic Beanstalk will assume port 5000 while Spring Boot defaults to 8080.
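
For the record, the properties I set looked roughly like this (values are placeholders; the names follow Spring Boot’s relaxed binding):

SERVER_PORT=5000
SPRING_DATASOURCE_URL=jdbc:mysql://<RDS_ENDPOINT>:3306/friedflix
SPRING_DATASOURCE_USERNAME=<MASTER_USERNAME>
SPRING_DATASOURCE_PASSWORD=<MASTER_PASSWORD>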

After applying the environment properties, EB restarts the app for us. So once it’s up, we’re done!

(We can actually take care of all of the above steps with a few simple CLI commands as well. I included an example shell script in the GitHub repo for reference.)
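
As a rough approximation of that flow (not the exact script from the repo; the platform name, region, and values here are illustrative), the EB CLI version looks something like this:

# Initialize the EB application with the Java SE platform
eb init friedflix-media-tracker -p java-8 --region us-west-2
# Create an environment (this provisions the AWS resources and deploys the jar)
eb create friedflix-env --single
# Point Spring Boot at the database and at the port EB expects
eb setenv SERVER_PORT=5000 SPRING_DATASOURCE_URL=jdbc:mysql://<RDS_ENDPOINT>:3306/friedflix \
    SPRING_DATASOURCE_USERNAME=<MASTER_USERNAME> SPRING_DATASOURCE_PASSWORD=<MASTER_PASSWORD>
# Redeploy after any changes
eb deploy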

Impressions

  • EB makes a lot of assumptions, which tends to make things simpler. One example where I had to override a default, though, was with the port number.
  • I’d say the Elastic Beanstalk experience is one of the better ones I’ve had with AWS products in general. It’s pretty seamless and was the lowest friction setup of the three public clouds I tried.
  • Actions tended to take a pretty long time. Setting the environment variables restarted the app, for example. Also, there isn’t really a queue of activity to follow, so it wasn’t always clear what was happening.
  • When using the web interface, a manifest file wasn’t required. Once entering CLI-land, it’s a necessity. Hiding it in the .elasticbeanstalk directory isn’t super user-friendly though. I had to check the docs on that one.
  • I’m saving my Day 2 ops post for another day, but just a brief note on logs. (I ended up needing to view them to see what wasn’t working at first.) While there doesn’t seem to be a native streaming log interface, it wasn’t hard to find the logs. Except it was a tad annoying having to download either the last 100 lines or the whole thing every time. There is a decent CLI option here though (eb logs).

Azure App Service

You may be asking, “why on earth would you deploy a Java application to a Windows server anyway?” Fair question. Microsoft has actually done well at embracing Linux recently. At the end of last year, they announced Azure App Service on Linux, and it went GA just this month. Unfortunately, it doesn’t support Java at this time (only PHP, Ruby, Node.js, and .NET Core). While it’s great news for some apps, it didn’t help me here, so Windows it is.[2]

First, we create the web app. No code needed at this point. Once up and running, the app will be available at http://<APP-NAME>.azurewebsites.net/.

Once it’s done creating, we click the newly-created web app in the App Services area. We need to go change the Application Settings to enable Java because it’s off by default.

Now we’re ready to upload the code. There are a few different ways to do it using the Deployment Options menu. Azure App Service offers integrations with developer IDEs and source code management tools. I just want to upload my jar file.[3] The web deploy option that integrates with IDEs does have a CLI (msdeploy.exe), but it’s Windows only. No Mac support. So the best option for me in this case is to use FTP. It wouldn’t generally be my first choice, but at least it’s scriptable. (It also supports FTPS.)

To make this work, we have to set up FTP credentials in the Deployment Credentials section.

Then we can get the connection info from the app Overview area.

We’ll use the standard FTP put command (or your favorite FTP client) to upload the jar file to the site/wwwroot directory, along with a manifest file to specify how to run the app.
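
On the Windows flavor of App Service, that “manifest” is a web.config that tells IIS how to launch the jar. Mine looked roughly like this sketch (the jar name matches my build, and %HTTP_PLATFORM_PORT% is the port App Service hands to the process):

<?xml version="1.0" encoding="UTF-8"?>
<!-- web.config uploaded to site/wwwroot alongside the jar (illustrative sketch) -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpPlatformHandler" path="*" verb="*"
           modules="httpPlatformHandler" resourceType="Unspecified" />
    </handlers>
    <!-- Launch the Spring Boot jar and bind it to the port App Service provides -->
    <httpPlatform processPath="%JAVA_HOME%\bin\java.exe"
                  arguments="-Dserver.port=%HTTP_PLATFORM_PORT% -jar &quot;%HOME%\site\wwwroot\media-tracker-0.0.1.jar&quot;" />
  </system.webServer>
</configuration>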

Now we have to deal with the MySQL database instance. MySQL In App is offered as part of the Azure App Service, but it’s hosted on the same instance as the app and isn’t intended for production use. There is an option from ClearDB we could use. As it turns out, though, Azure recently released a preview version of Azure Database for MySQL. We’ll try it out.

After creating the instance, we have to take care of a few things. First, we need to adjust the firewall rules to allow the app instances to reach the database. We do this in the Connection Security settings, but we have to look up all the outbound IP addresses for the app first. These are found under the Properties section of the app service.

Notice I also add my IP (with the + Add My IP button). This is so I can connect to the instance from my machine and create the actual database, as previously mentioned.

We grab the database server name and login details from the Overview area of the database instance in the Azure portal. Finally, we set the environment variables for connecting to the database.

Now all we have to do is reset the app, and we’re all good.

(Once again, we can take care of all of the above steps with the CLI. I included an example shell script in the GitHub repo for reference.)
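
In broad strokes (again, not the exact repo script; the resource names, SKU, and location here are just examples), the az CLI version goes something like this:

# App Service plan and web app
az group create --name friedflix-rg --location westus
az appservice plan create --name friedflix-plan --resource-group friedflix-rg --sku S1
az webapp create --name friedflix-media-tracker --resource-group friedflix-rg --plan friedflix-plan

# Database server (Azure Database for MySQL was still in preview) and a firewall rule
az mysql server create --name friedflix-db --resource-group friedflix-rg --location westus \
    --admin-user dbadmin --admin-password <PASSWORD>
az mysql server firewall-rule create --server friedflix-db --resource-group friedflix-rg \
    --name AllowApp --start-ip-address <OUTBOUND_IP> --end-ip-address <OUTBOUND_IP>

# Environment variables for the Spring Boot app
az webapp config appsettings set --name friedflix-media-tracker --resource-group friedflix-rg \
    --settings SPRING_DATASOURCE_URL=<JDBC_URL> SPRING_DATASOURCE_USERNAME=<USER> SPRING_DATASOURCE_PASSWORD=<PASSWORD>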

Impressions

  • My past experiences with Azure have often felt overwhelming. It seems like there are almost too many options. It’s true here too. Even when first creating the app, it wasn’t clear which “kind” of app to pick. The Azure Portal UI is notably bad. I’m not a fan of the blades and endless scrolling through settings to get what you need. Use the CLI whenever possible.
  • It’s not a perfect method, but one way to judge a user experience is by how much documentation you need to refer to. For what it’s worth, to deploy my Spring Boot application to Azure, I used at least three separate docs. (Here, here, and here.)
  • After the AWS experience and also being familiar with Cloud Foundry, it felt weird not to provide code to get started.
  • I ran into a stupid problem of not setting binary mode when uploading my jar file through FTP. Another reason not to use FTP.
  • Most manifest files these days use YAML because it’s easy to read and pretty easy to write. Having to use XML here wasn’t the greatest.
  • The interface for adding firewall rules is worse here than I’ve seen anywhere else. Even if you opt for the CLI, you still have to start by looking up the IPs for each instance.

Google App Engine

Google App Engine (GAE) is the PaaS offering on the Google Cloud Platform (GCP). Each GCP project can contain exactly one app, which lives at https://<PROJECT-NAME>.appspot.com. That one app can have multiple versions, though, each serving a configurable percentage of traffic. It’s slightly reminiscent of the application/environment construct on AWS EB, but it’s really pretty different from what I’ve seen on other platforms. It’s an interesting way to roll out new code to subsets of users or manage blue-green deployments.

To start, we create the app. Again, no code needed yet.

The CLI offers a simple way to do this as well:

# If not already installed
sudo gcloud components install app-engine-java
# Now create the app
gcloud app create --region us-central

We’ve got our app, now let’s set up the database. We create a MySQL Second Generation database.

The nice thing with Google’s interface is that we can actually create the database in the instance right from the portal (or the CLI). No need to log in to the database with a MySQL client.

Once again, this can all be done with the CLI.

# If not already installed
sudo gcloud components install beta

# Create database instance
gcloud sql instances create friedflix-media-tracker --tier=db-n1-standard-1 --region=us-central1

# Create database
gcloud beta sql databases create friedflix --instance=friedflix-media-tracker

# Get connection info
gcloud beta sql instances describe friedflix-media-tracker | grep connectionName

We’re almost ready to get our code out there. First, we create an app.yaml manifest file in the src/main/appengine directory. This is the only place where we can set the environment variables that specify our database details. With GAE, we won’t be just uploading the jar file like with all the other services. There doesn’t seem to be a way to do this, so we’ll take advantage of the appengine plugin for Maven. To do that, we have to add it to our pom.xml file.

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>appengine-maven-plugin</artifactId>
  <version>1.2.1</version>
</plugin>

To connect to our Cloud SQL database instance, we specify a special JDBC connection string in our manifest that makes use of the Google Cloud SDK. The benefit here is that we don’t need to configure any firewall rules or special settings on the database. The downside is we have a few additional dependencies we’ll need to include in pom.xml.
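
Concretely, the manifest ends up looking something like this sketch (the project ID and credentials are placeholders, and the extra pom.xml dependency is Google’s Cloud SQL socket factory for MySQL, where the exact artifact depends on your MySQL Connector/J version):

# src/main/appengine/app.yaml (illustrative sketch)
runtime: java
env: flex

env_variables:
  SPRING_DATASOURCE_URL: "jdbc:mysql://google/friedflix?cloudSqlInstance=<PROJECT_ID>:us-central1:friedflix-media-tracker&socketFactory=com.google.cloud.sql.mysql.SocketFactory&useSSL=false"
  SPRING_DATASOURCE_USERNAME: root
  SPRING_DATASOURCE_PASSWORD: <PASSWORD>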

Now we can push our app. We’ll use the Maven plugin we enabled.

./mvnw -DskipTests appengine:deploy

It takes a while, but the command does complete successfully and we’re up and running.

Impressions

  • If you’re not used to the paradigm of one app per project with multiple versions, it’s not entirely clear at first. I deployed a lot of versions inadvertently until I figured out the whole traffic splitting thing.
  • I had used GAE before, but it was a while ago so things were pretty different. Google’s portal UI is usually pretty solid, but it did take me a bit to figure out exactly how things worked here. Again, not providing the code up front felt strange.
  • The CLI is easy to use and I preferred it to the portal in most cases. It was very neat to be able to create the database from the UI or CLI without logging into MySQL. It would have been nice to be able to specify environment variables as well, though using the manifest was fine.
  • Using the Maven plugin was okay, but I would have liked the flexibility to just provide a jar file and call it a day. The only way I could figure out to do that was to use the custom runtime and specify the commands to run it in a Dockerfile. I wanted a more pure PaaS experience, so I didn’t go that route.
  • I ended up needing a fair amount of documentation here too, but it was almost all about connecting to the database. The Cloud SQL dependency stuff was not documented super well. I had to use pieces of documentation from here and here. Even then it required some trial and error to finally get working.
  • The deployment took a pretty long time. The CLI gave little indication of what was happening, but I was able to follow along with the streaming logs in the portal.

Pivotal Cloud Foundry

Pivotal Cloud Foundry (PCF) can run on many cloud IaaS offerings, including AWS, Azure, and GCP, as well as vSphere or OpenStack for on-premise deployments. For this exercise, I will take advantage of the Pivotal Web Services (PWS) offering. PWS is a public, online, managed PCF environment. It comes with an existing marketplace of services like MySQL, RabbitMQ, and Redis. Each app lives at https://<APP-NAME>.cfapps.io/.

While there is a web UI for managing apps and services, a deployment on PCF happens from the cf CLI. Each user of PCF has access to one or more orgs and spaces. These are constructs for multi-tenancy and separation of app environments. We can see (or set) which endpoint, org, and space our CLI will connect to with the cf target command. Mine is set to target my space in PWS.

api endpoint: https://api.run.pivotal.io
api version: 2.94.0
user: [username]
org: bfriedman-org
space: development

Everything starts with the cf push command. We can choose to specify required parameters using the command options, or we can use a manifest file. For now, we’ll just use the -p option to target our jar file.

From the top-level of the source code directory:

cf push friedflix-media-tracker -p target/media-tracker-0.0.1.jar --no-start

This will push our app, but we’ve specified that we don’t want to start it yet. That’s because we still need to create a database and set our environment variables. We can do that from the UI.

To create a database service instance, we leverage the Marketplace:

We’ll use the ClearDB MySQL offering and choose the free Spark DB plan for now.

We name the instance and we can even bind it to our app from here.

Now we can go to our app settings, grab the database service connection info, and set our environment variables:

UPDATE: Turns out we don’t even have to do this step at all! Spring Boot magically detects the database automatically (by looking at existing Cloud Foundry environment variable VCAP_SERVICES) and autowires the configuration for us at startup. Even easier than I thought!

We start our app and we’re good to go.

While the UI is pretty easy, let’s take a quick look at the power of the CLI, especially with a manifest. Using the manifest file, we can specify the jar file path and bind our database service. We can also set our environment variables without even knowing the values. (UPDATE: We actually don’t even have to do that because, same as above, Spring Boot figures it out from the bound service alone. I removed the environment variables from the manifest and it still works!) We reference Cloud Foundry’s existing properties for the bound services:

---
applications:
- name: friedflix-media-tracker
  path: target/media-tracker-0.0.1.jar
  buildpack: java_buildpack
  services:
  - friedflix-db

Now with the manifest file in the main directory, we simply create the database service and push the app:

cf create-service cleardb spark friedflix-db
cf push

Before too long, the app is up and running.

Impressions

  • The web UI is a bit limited, but that also means it’s very simple to use. There’s real power in the CLI, but the UI is a nice addition for some things.
  • The elegance of creating and binding the database service wasn’t matched on another platform. In fact, the act of binding creates the database for you, so it really did make it easier than any of the other platforms.
  • Setting the environment variables to reference the service properties is awesome. Only the Google SQL connector was close to the ease of deployment, but it required lots of code dependencies.
  • Granted, I’ve had experience using PCF before and all the other platforms were basically new to me. Still, I did have to reference documentation a few times to look up manifest file values and things. Even so, this took me the least amount of time of all the platforms and I ran into the fewest problems starting the app.

Wrap Up

Each platform had its strengths and weaknesses, as we’ve seen. All the platforms I looked at here are opinionated to some degree. They all make some assumptions about the application and desired configurations. Yet they all let the developer provide customizations and specific settings.

Pivotal Cloud Foundry seemed to be the most opinionated platform of the bunch. This made it the most frictionless for getting an app deployed. The breadth of services offered by the big cloud providers is very nice though, depending on what you need. This was a pretty simple example, but each platform might make sense for a given workload.

I’ve also only explored the deployment process here. There is a lot more to discover around Day 2 operations. Once it’s out there, we still need to manage and monitor our app. How do we scale it? How do we do health management? Observability? I’ll take a look at the options each platform provides in a followup post. Stay tuned!

Footnotes
  1. I suppose you could consider the third approach of using a PaaS-only provider like Heroku. I didn’t consider that here.
  2. A better option might have been to use the Azure Container Service. Or maybe I should have chosen to write a Node.js app instead. Either way, that’s a separate blog post for another day.
  3. I tried to avoid using IDE or source code repository integrations for this exercise. The right thing to do would be to write automated tests and wire up a CI/CD pipeline to push the code to the platform. (Since I’m not a real developer, I did not write tests, although Spring does make that pretty easy.) Yet another separate blog post for another day.

A Hybrid Career

In a sea of overloaded terms, it seems to me that the word “hybrid” is perhaps one of the most often used. Whenever we want to convey that we are combining two [or more?] different elements into one, we stick the word “hybrid” in front and call it a day. It all started with genetic cross-breeding – plants and animals – in the biology world. But then the vehicular world joined the fun – hybrid cars and hybrid bicycles. In the last few years I haven’t been able to stop hearing about hybrid clouds or hybrid IT. And not to be outdone, the financial industry is in on the trend – you can of course invest in hybrid securities. (There are even hybrid golf clubs, in case you can’t decide between that 7 iron and your 3 wood.)

As I reflect on my professional past, and in a continued effort to overload the term, I sometimes find myself describing my career in terms of hybrid jobs. (Indeed, I am not the first one to coin this term.) I like to think of myself as a little left-brained and a little right-brained; a little technical, a little business; a computer geek with people skills. I am definitely most happy when I have a job that lets me build things, write some code, and potentially get into the weeds on technical stuff, while also allowing me to analyze, synthesize, collaborate, and share information with a wide array of audiences from sales people to customers to engineers. I like sitting in that nice spot in the middle of a Venn diagram.

When I last changed jobs after spending so long working in various areas of Enterprise IT, I was very lucky to have found a position that seemed to combine my skills and interests into something that felt like a perfect fit. Even more than the job definition itself, I was able to hybridize my career as I moved from a monolithically slow enterprise IT world to a lean and agile product team in an organization with a startup sensibility.

The growth and knowledge I gained during my tenure there has been invaluable, but the time has come to once again expand on the hybridization of my career. So today, I’m very happy to report that I’ve joined Pivotal as a Product Marketing Director.

There’s something about Pivotal’s mission – transform how the world builds software – that appeals to all parts of me. I’ve lived the problem from both sides. When I worked in enterprise IT, we were constantly challenged by everything related to the development and deployment of software. It just wasn’t a core competency of the company, and things often took too long and required too many people with too many different skill sets. On the other hand, even in a product development organization where building and shipping software is supposed to be the core competency, it was still challenging dealing with the complexities of engineering and large teams of developers who have various areas of expertise and experience.

No matter what kind of organization you’re in, building software is a difficult thing to do, especially as you constantly face the rapidly changing technology [and business!] landscape. Except nowadays, every company is a software company. It’s not just the Silicon Valley startups who need it. Every company these days undoubtedly has a lot of software – whether internal or customer-facing (or both) – to build and manage.

That’s what makes Pivotal’s mission so incredibly intriguing. Companies (perhaps the biggest ones especially) need to rethink and revisit how they design, develop, and deploy software. In today’s arena, that often means they need to be more cloud-native. But it’s bigger than one technology or a single tool – it’s truly about transformation. That’s why I really love how Pivotal tackles it not just with a strong portfolio of products (from the flagship Pivotal Cloud Foundry, to the open source Spring framework, to the more widely known Pivotal Tracker, and even a Big Data Suite), but also through Pivotal Labs, where they partner directly with customers and guide them through the change.

As for me, I’m particularly fired up about that one word in my new title that I haven’t fully experienced in my career yet – marketing. I’m thrilled to be able to work with a truly incredible group of professionals as I discover how to sprinkle that bit of marketing in along with my passion for the technology and my enthusiasm for communicating about it. I’m eager to get started. Let’s do this!

5 Things I’d Tell My Enterprise IT Self

It was exactly one year ago today that I became a Product Owner (née Manager) at CenturyLink Cloud, and as a colleague of mine likes to point out, that’s a really long time in “cloud years.” As I reflect back on the experience I’ve had so far, it feels good to know that the me of today knows a whole lot more than the me of one year ago. Just as a college student wishes he could go back in time and educate his high school self, I now find myself thinking about the helpful things I could share with my enterprise IT self and all my former colleagues. So with that BuzzFeed-esque premise, here are some things I’d let the trapped-in-IT-purgatory version of myself know about how life could be.

You Don’t Know The Cloud

Everyone I worked with in IT used to talk about “the cloud” as if they knew what it was and had used it on various projects. Sure, there were plenty of times that a vendor would sell services branded as “cloud” to attach some buzz to what was really more analogous to a traditional application service provider or legacy hosting model. In reality, almost nobody in IT actually understood or took advantage of cloud for any practical purpose.

My favorite definition to use now when describing the cloud is Dave Nielsen’s O.S.S.M. acronym: on-demand, self-service, scalable, measurable. Before, all the cloud really was to me was a series of “as-a-Services” — Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS) — and we seemed most comfortable with SaaS (a familiar story for many enterprises). I complained plenty about how long it took to get a server stood up, and I thought the move the company was making to colocation might begin to solve things. I didn’t recognize how much IaaS would have helped with that, or, even more, how the power of PaaS might have eliminated that need altogether.

The barriers to entry for the cloud were the usual ones — security concerns about data not being on premises, the question of whether our regulated/qualified systems could live in the cloud, some perceived lack of control — I’ve heard them all by now. Except they aren’t barriers, they are just challenges. The tides are turning and enterprises are embracing cloud, from public to private to hybrid as well.

Lesson: Have your IT organization seriously explore a cloud migration. Consider PaaS along with IaaS. Hybrid cloud may also be the way to go. Don’t be discouraged by the challenges — there are ways to work through them.

Your Project Management Methodology Is Broken

Most of the projects I worked on in my former life lasted more than a year and yielded little to no value for the business. By the time the original requirements were being delivered, they had already changed and probably weren’t even right in the first place. The project methodology we used, RUP (Rational Unified Process), was supposed to handle this problem with iterations. In practice though, this was mostly lip service as the project invariably fell to using a more traditional Waterfall method.

On the team I work on now, we use Agile. There is a wealth of information to be found elsewhere online about what Agile methodology is and how it was born. There are many forms of Agile, such as Scrum or eXtreme Programming, to name just two. One of the key elements of Agile is its flexibility in allowing for rapid response to change. It’s about shorter development cycles (called “sprints”) and it encourages early delivery and continuous improvement. We do 21-day sprints, though some teams have even shorter iterations (1-2 weeks) depending on what makes sense for a given product. Each sprint is focused on the progressive refinement of new features — delivering some level of value with each release, starting with the Minimum Viable Product (MVP). This creates a constant feedback loop and allows the team to fail fast and course-correct quickly as needed. Every morning there is a “standup” meeting where the whole team stands up and talks about what they are working on. At the end of each cycle we have a retrospective to discuss what went well, what didn’t, and what actions we can take to improve the process.

I can already hear some former colleagues pooh-poohing these ideas with utterances of “that doesn’t work in a big enterprise environment” or “what about documentation and compliance?” or “it won’t fly with the way we do budgeting.” Not true. It can work. One of our engineering leaders likes to say something like, “This is the best way we know how to develop software today. If we find a better way tomorrow, we’ll do it that way instead.” Find a better way and make it work for you.

Lesson: Use Agile. Forget about “services” and “projects” and build products. Fail fast. Ensure feedback loops. Embrace change!

(It should be noted that some things I’ve read — mostly by IBM, the purveyor of RUP — are quick to point out that RUP is a framework while Agile is a software development process, that RUP and Agile can co-exist, or that RUP could even be considered Agile (because it uses iterations). All I can add to the conversation is that this has not been my experience and I have seen more success by taking a truly Agile approach. Your mileage may vary.)

Learn About DevOps and Spread the Word

For a few months at my old company, I was on a small team tasked with delivering SharePoint. It started out experimentally and wasn’t widely used so we were able to fly under the radar a bit and follow our own processes. We did pair programming, frequent releases, progressive refinement, and just the right amount of documentation. Looking back now, we were exhibiting certain Agile characteristics without even knowing it. On top of that, we were responsible for both building and running the whole stack and we embraced automation wherever possible. (I have fond memories of “Redeployer” — our ASCII-art-infused command line tool.) At the time, I’d never heard of DevOps, but I now know that these are some of the key characteristics of DevOps organizations.

One of my first assignments in my new job was to read The Phoenix Project and it was a completely eye-opening experience. It’s a great way to be introduced to DevOps if you’re unfamiliar with it, as is Richard Seroter’s Pluralsight course, DevOps: The Big Picture. Just like with Agile, the resources you can find online about DevOps are endless and will all do a better job defining it than I could. Sticking with the theme of four-letter acronym definitions, John Willis coined C.A.M.S. to describe DevOps: culture, automation, measurement, sharing. In a way, it’s kind of an extension of Agile for the Operations world…but it’s really more than that. To me, it’s about the idea that everyone is on the same team, working together towards a common goal. No more “us vs. them” mentality.

Unfortunately, our small, Agile-ish, DevOps-ish SharePoint team did not last long. It got sucked into the enterprise IT vortex never to be productive again. For an organization to truly adopt DevOps it must completely change the way it thinks, starting at the top with upper-level management and cascading all the way down to the boots on the ground. There’s no tool for doing DevOps, but there are DevOps-y tools that have gained popularity like Chef (infrastructure as code), Docker (containers), and a bevy of continuous integration (CI) tools.

Lesson: You probably can’t change your organization to magically embrace DevOps, but you should at least try to adopt whatever DevOps principles you can within your own team…and maybe you should slip a copy of The Phoenix Project under the door of every executive at the company and hope they get the DevOps bug.

There Is Database Life Outside Of SQL

One of my favorite computer science courses in college was the relational databases class. Throughout my career in IT, particularly during my days supporting the Finance organization, no skill served me better than my knack for writing complex SQL queries. So the first time I heard about “NoSQL” databases, my brain wasn’t ready to comprehend what that meant. Nobody I worked with was ready either. Every application I worked with in enterprise IT had an RDBMS backend. The only “choice” was whether to use SQL Server or Oracle.

I realize this is still largely the case for many organizations. I see plenty of customers now looking for ways to put their critical relational database workloads on the cloud. Still, NoSQL and Big Data are some of the biggest buzz words around, and while enterprises have been relatively slow to adopt them, this could be the year they really start to pick up. Admittedly, my experience with NoSQL databases is still relatively limited, but becoming familiar with some of the different types (like key-value stores or document stores) and many of the primary use cases (distributed, horizontal scalability, extremely large data volume, schemaless data structures) has me thinking about data storage in a way I never used to.

Lesson: Relational databases are not the only game in town. Sometimes a relational database is the right answer, but sometimes it isn’t. Look for the right situation to consider one of the many NoSQL alternatives that are available. (Shameless Plug: Check out CenturyLink’s recent acquisition, Orchestrate.io.)

Actually Build For Scale

Towards the end of an IT project, just before go-live, we used to retroactively write a Non-Functional Requirements (NFR) document (because it was a mandatory artifact) and usually it would contain made up numbers about performance or load requirements, most of which could never be tested or actually met in the real world. We always tried to scale the app, usually by adding more servers and a load balancer. Of course this was never enough because we were a global company and we put most of our apps in a single location in the United States. (Plus, we usually had a single database server behind the app servers anyway…see above.)

Enterprise applications don’t have to be on par with Facebook or Google, but large organizations still need to build apps that scale for both heavy load as well as for a global distribution of users. Just about every application I built during my IT tenure used a basic three-tier architecture and a simple load balancer. In today’s modern environment with the convergence of enterprise and consumer apps — users expect things to work just like they do on their web browser at home and on their smartphones and tablets — this just won’t cut it anymore. Since leaving the one-track mind of the enterprise, I’m just becoming familiar with some of the emerging architectures (twelve-factor apps,  microservices, containers) that scale better and are more suitable for running in a cloud environment.

Lesson: Applications should be designed for scale from the start. Global accessibility and consistent performance across geographies should not be an afterthought. If the tool you select or build does not support your scalability requirements, it will be a failure regardless of how well it works. Consider a more modern architecture and leave the three-tier apps behind.


As Bob Dylan wrote, “the times they are a-changin'” — and one thing I’m glad about is that in this past year I’ve finally begun catching up with the times. I know big companies usually have large enterprise IT organizations that always seem to have a stigma for being behind the times. Well, here’s another quote for them from German author Eckhart Tolle — “awareness is the greatest agent for change.” If you’re trapped in an organization like the one I was in, don’t wait for your future self to travel back in time and educate you. Educate yourself now and start changing the way you do IT.

Being a Product Manager

It’s been three weeks since I began my Product Manager position at CenturyLink Cloud, and it’s been a great experience so far. I’ve learned so much already and am really enjoying my continuing journey from the enterprise IT world into the cloud computing space.

The most frequent question I’ve gotten from all of my family and friends since I took this job has been, “So…what do you do?” Of course, when I was working in IT at my previous job, my answer was often just, “I work with computers.” I imagine they pictured me helping people fix their computer problems like Jimmy Fallon’s Nick Burns character from SNL. With this new job, it seems to have become even harder to describe what it is that I do as it seems people often have no idea what the “cloud” really is or what a product manager does. In fact, even when I accepted the position, I had only a rough idea of how exactly I’d be spending my time on a daily basis. Thankfully, it hasn’t taken me too long to figure out. While I was up in Seattle meeting the team last week, we had a very productive discussion about precisely this topic.

The Bobs

As product managers, what exactly do we need to know and what are we actually responsible for doing? First, it’s important to understand what we need to know to be an effective product manager, and we learned that there are three key areas of knowledge: product, market, and accounts.

What a Product Manager Knows

Product. Of course, product managers need to know all about their product. I mean, it’s in their title — if we don’t know the details of the product we are managing, we can’t rightfully be called a product manager. This means we have to be intimately familiar with all of the features of the product, including how they work and how to use them, as well as why they were designed a particular way. It also means we need to have some sense of the product roadmap, ultimately being aware of what features are on the near-term horizon as well as at least a broad understanding of where the product is headed over the long term. In the case of our team, this includes all products in our portfolio (though I’ve heard some teams have product managers assigned to individual features or to one specific product within a portfolio).

Market. In order to help us develop our product roadmap and also better understand how our features compare with those of our competitors, we have to stay aware of what’s out there in the market, what the industry trends are, and where there are gaps, both in our product and in the market in general. For our team, this means keeping up with all the news that’s out there about cloud computing — competitor press releases, thought leaders’ blogs, research articles, white papers, presentations, anything that will let us gain insight into who is doing what with cloud services and where the technology is headed. This means reading…a lot. I’ve already discovered that consuming so much content and determining what is important to retain can be pretty overwhelming. Luckily, I’ve found that using services like Pocket, Flipboard, Feedly, and Evernote really helps me to track lots of information, glean what’s important, and save it for reference.

Accounts. While it’s helpful to see what our competitors and others in the market are doing, there is perhaps nothing more valuable than understanding what our customers are doing with our product(s). Keeping up with the end users is an important part of a product manager’s job. Having regular calls or meetings and just maintaining a positive and open relationship with users is a great way to do this. While end users will likely have a relationship and regular interactions with a sales representative or account manager, making sure their channels of communication are open with the product management team as well can make a big difference here. I think this is probably the most challenging of the three knowledge areas to keep up with because it requires such active participation and frequent communication with end users.

Okay, so a product manager has to know a lot…now what do we do with all of this information?

What a Product Manager Does

In general, a product manager does a lot of information sharing. All of that knowledge we have about the product, market, and accounts, we have to share with various audiences who are interested and need the information to do their jobs. This includes internal evangelism where we need to help others in our organization understand what our products are, how they work, how they are evolving, and why we are (or aren’t) building a particular feature. It also includes public engagement as well — talking about our product, or even our industry in general, on social media, in publications, and at conferences. It’s about promoting the product both within the company as well as to the broader community, and since we know the product better than anybody else, what its place is in the market, and how our customers are using it, we are often in the best position to do this.

What I’ve found most interesting about the product manager role is that it seems to sit right in the middle of so many key functions within an organization. In the case of our team, we are part of Engineering and already work closely with the developers, but we also have to interface very frequently with Operations, Marketing, Sales, and even the end users. Ultimately, all of these various groups are our customers. In order to help gain all of that knowledge we need, we need to interact with all of them and keep them engaged and as happy as possible. This can prove to be a difficult task, of course, given all of the competing priorities. 

Given that we sit in Engineering, perhaps our most important job functions include backlog management and sprint planning. It is the product management team who is primarily responsible for determining if we are going to build a feature, and when we are going to build it. In other words, it doesn’t get into the product unless we say it does. Of course, we look to all of our customers to help us make the determination, but the decision is essentially ours. This may result in some healthy debate as part of the planning process, and so it helps to be armed with facts (what we know) to support these decisions. If a developer is curious about why we have to build a feature, it helps if we can say something like “all of our competitors are doing it” or “our top five clients asked for it.” Conversely, if an end user asks why we don’t have a feature, it’s nice to be able to say “we are working to get it into the product soon” or “we will never be able to support that because it doesn’t fit with our vision of the product” or even “have you thought of using this other feature instead to accomplish the same thing?” Sometimes we may even do some feature prototyping first to help understand what to build and how it might work.

Along with our engineering team, we need to support our sales and marketing folks as well. We may do some more thorough competitive analysis, not only to help us determine what to put into the product, but also to help them better understand our specific value proposition or what the differences are among the feature sets in the market. In the case of our team, we are also tasked with product definition as well as potentially helping to determine product pricing. This means that we have to work with Finance, Operations, Engineering, and others to find out what it will take to add products to our portfolio so we can figure out the specific details of what the product will look like when it goes to market (i.e., specifications, prices, features, value proposition, etc.). Additionally, we may be called upon to help with sales support if there is a need for some deeper technical knowledge to help win over a potential customer. It also falls upon the product managers to take responsibility for analyst briefings and make sure they have all the information they need to accurately reflect the product offerings in their research papers and market analysis. (CenturyLink Cloud was recently recognized by Gartner in the Magic Quadrant for Infrastructure as a Service.)

Finally, let’s not forget that there is also the need to continually engage with the end users, not only so we can gain insight into how they are using the product and what features they are interested in, but also to keep them informed on what’s coming, as well as helping them get as much as they possibly can out of the product. This can be achieved by writing release notes, knowledge base articles, and keeping them up to date with customer briefings.

One thing I’ve heard from multiple people is that being a product manager is hard. I’m definitely starting to see why, as there is so much to know and so many decisions to make that have a real impact on all of our customers. I’m up for the challenge, though, and excited to continue to learn and develop all the skills and knowledge necessary to be a great product manager and contributing member of our product team.

11 Years Later

When I was hunting for my first career job as I was winding down my college years, I remember suiting up (though this was a couple of years before How I Met Your Mother aired, so that term may not have been around yet) and going on some interviews offered at the Cal Poly career center. I got through to the second round for two of them. One was for St. Jude Medical in Sylmar (where a few of my Cal Poly Engineering brethren ended up working for a time), and the other was for Amgen in my hometown of Newbury Park.

The entirety of my experience with Amgen at the time had been the lectures that I attended at the conference center there to earn extra credit for my 9th grade biology class. It seemed strange to even consider working there. I figured with my computer science degree, I’d end up in the Bay Area working for some major software development company, or maybe I would join a small startup and get to work with some really innovative, cutting-edge technology or something. I never imagined I’d take a job working in information technology at a large biotech company. Let alone basically going back home to do it.

And yet, as hard as I tried to stay away, there was something appealing about being close to my family, having the kind of benefits that Amgen offered, and still getting to work with technology in some respect. Sure, I wouldn’t be flexing my programming muscles as much as I would at a Microsoft or a Google, but it would still be a great opportunity to learn and grow. It’s not like I was going to be there forever.

Well, I wasn’t…but it sure felt like it. Today will be my last day at Amgen after nearly eleven years, six positions, eight bosses, and only three previously used laptops. On Monday, I start a new job at CenturyLink Cloud as Product Manager. Though based in Seattle, I will be working remotely from a home office and traveling up there occasionally to check in and be with the team.

This is a pretty big change for me, both from a career and also a lifestyle perspective. It honestly wasn’t even something that I was actively looking for at first. But when presented with the opportunity, it became increasingly clear that it was going to be virtually impossible to pass it up. Though I’ve been very happy at Amgen, particularly in my latest role there, I have watched the company over the past few years and seen it progressively enter a place where technical skills aren’t as valued as they used to be and the thirst for innovation is hard to come by. I’ve successfully navigated a number of job changes there that all helped me grow and learn so much, and I’m extremely grateful for that. But I like to be able to see the next job that I’m going to take, and I just started having trouble finding it at Amgen.

Thus, when the possibility of joining a high-performance team in a more tech-focused space was pitched to me, hard as my risk-averse self tried to ignore it and stay in the comfort zone that is Amgen, my desire and thirst for something new and different ultimately won out…and I could not be more excited to get started. The real challenge is going to be trying to explain to my daughter that Daddy is still “at work” even though he’s physically “at home” also. That, and getting work done while hearing Frozen playing in the other room. But I’m looking forward to it.