⭐ CloudProfs Issue #12: Google Cloud Next and serverless!

Welcome! This is the browser-based version of CloudProfs, sent to email subscribers on October 15 2021.

Want to let me know what you loved in this week’s issue? Or what you hated? Or what you were completely indifferent to? Fill out this super-quick reader survey and you will receive a free Packt eBook! Or email the editor here.


Contents

What’s Been Said and Done in Cloud This Week
Introducing… TriggerMesh
Google Cloud Next Tutorial: What’s New in Serverless?
Google Cloud Next: Papa John’s Case Study: A Slice of the Action
Secret Knowledge & Hidden Gems


What’s Been Said and Done in Cloud This Week

Google Cloud Next came and went this week, with the event being virtual for the second year in a row. Among the most important product announcements was the launch of Google Distributed Cloud. Built on application modernisation platform Anthos, the offering further emphasises the hyperscale cloud provider’s ambitions to extend its infrastructure to wherever a user’s workload resides, be it the cloud, edge, or on-premises. “Using Google Distributed Cloud, customers can migrate or modernize applications and process data locally with Google Cloud services, including databases, machine learning, data analytics and container management,” the company wrote in a blog post.

Ubuntu 21.10 has landed, with a renewed promise to be ‘cloud-native from the edge to the mainframe.’ The iteration, named Impish Indri, has containerised images available on Docker Hub and Amazon ECR Public Registry. The latest LTS Docker Images from Canonical include Grafana, Prometheus and NGINX, with Apache Cassandra v4 support a new addition. The iteration is the final interim release before the next Ubuntu Long Term Support (LTS) due to be released in April 2022.

A study of telecom network executives by DriveNets has found that operators recognize the ‘inevitability’ of disaggregated cloud-native networks. The report added that telco execs – perhaps begrudgingly – accept that hyperscaler cloud architecture has superior economics. Telecom operators view hyperscalers in one of three ways, per the report: as business partners who can provide IaaS for operators’ IT and operational payloads; as models from which operators can learn and adopt business models; or as threats who commoditize operators’ core businesses. Optimal scalability, faster innovation, and resilience to change were seen as the three biggest drivers for pursuing ‘hyperscale economics.’ The full report can be read here (email address required).


Introducing… TriggerMesh

TriggerMesh, a cloud-native integration platform provider built on top of Kubernetes, has announced that its platform is now open source.

The company prefers to use the term ‘serviceful’, rather than serverless, to describe its offering. Sebastien Goasguen, co-founder of TriggerMesh, spoke at KubeCon NA last year on this topic, and in a blog post outlined a situation which may resonate with cloud developers. “Developers are modernizing their applications by taking advantage of cloud services wherever they are and from whoever the provider is,” wrote Goasguen. “In addition, this modernization takes advantage of function-based offerings known as Function as a Service (FaaS) that auto-scale and provide a finer billing mechanism.

“We think that by focusing too much on functions, we often lose the fact that serverless offerings like AWS Lambda are actually about events: ingesting, storing, emitting and processing events,” added Goasguen.

This meaning of ‘serviceful’ can therefore be interpreted as ‘integrating services together.’ But to avoid any spaghetti code or systems, TriggerMesh instituted its Cloud Native Integration Platform.

Some example use cases for the open source platform include:

  • Build a data pipeline to fill your data lake, store all Git commits or all Salesforce events in an Elasticsearch cluster
  • Grab all logs from Azure and store them in Splunk after having filtered and annotated them
  • Grab metrics from anywhere and store them in Datadog
  • Run Amazon Comprehend analysis on objects stored in Google Cloud Storage
  • Manage Kafka connectors whether you use AWS MSK or Confluent Cloud – ‘no need for a Kafka connect cluster anymore’

In other words, while the similarity to a product such as Lambda is clear, the goal of TriggerMesh is to not be tied to any single cloud provider.

“The TriggerMesh approach to integration is very similar to the way infrastructure-as-code solutions such as Ansible, Chef, HashiCorp and Puppet are used by DevOps teams to deploy infrastructure,” the company noted in a statement. “The platform allows cloud operators and DevOps practitioners to deploy integrations as code™, which dramatically accelerates time to value and improves flexibility compared with typical integration platform as a service (IPaaS) solutions.”

Writing for ZDNet, Steven J. Vaughan-Nichols outlines TriggerMesh’s commitment to defining a path for cross-cloud integration for all devs. “Besides the program itself… [TriggerMesh] provides the sources you need to work with most AWS services like SQS, S3, and Kinesis, sources for Google Cloud Storage, Pub/Sub and Cloud Audit Logs,” wrote Vaughan-Nichols. “In addition, it’s also releasing Azure sources for Azure Blob Storage and Azure Audit logs.”

The software is available under the Apache Software License 2.0.

You can take a look at the GitHub repository here.


Google Cloud Next Tutorial: What’s New in Serverless?

Steren Giannini, senior product manager at Google Cloud, presented the various improvements, enhancements and modifications to Google Cloud’s serverless product portfolio: namely Cloud Functions and Cloud Run. The session was at Google Cloud Next this week, and you can view it here (unlisted link).

Google Cloud Functions enables users to ‘treat all Google and third-party cloud services as building blocks. Connect and extend them with code, and rapidly move from concept to production with end-to-end solutions and complex workflows.’

One of the key changes in 2021 has been new programming languages for Cloud Functions: Ruby, .NET Core, and PHP, alongside updated runtimes: Python 3.9, Node.js 14, and Node.js 16 in preview.

This is an example of a simple HTTP cloud function for Webhook/HTTP use cases in idiomatic Ruby:

require "functions_framework"

FunctionsFramework.http "hello_http" do |request|
  "Hello, world!\n"
end

This is an example of a simple HTTP function for Webhook/HTTP use cases in .NET Core:

using Google.Cloud.Functions.Framework;
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

namespace HelloWorld
{
    public class Function : IHttpFunction
    {
        public async Task HandleAsync(HttpContext context)
        {
            await context.Response.WriteAsync("Hello World!");
        }
    }
}

This is an example of a simple HTTP function for Webhook/HTTP use cases in PHP:

use Psr\Http\Message\ServerRequestInterface;

function helloHttp(ServerRequestInterface $request): string
{
    $queryString = $request->getQueryParams();
    $name = $queryString['name'] ?? 'World';

    return sprintf('Hello, %s!', $name);
}
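
Functions like the three above are deployed with the gcloud CLI. A minimal sketch, assuming a hypothetical function name and the Ruby runtime (runtime IDs such as ruby27, dotnet3 and php74 correspond to the languages mentioned above; check Google’s documentation for current values):

```shell
# Deploy an HTTP-triggered function on one of the new runtimes.
# The function name hello_http is hypothetical.
gcloud functions deploy hello_http \
  --runtime ruby27 \
  --trigger-http \
  --allow-unauthenticated
```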

Another key change has been adding min instances to Cloud Functions. This helps alleviate the problem of cold starts, which users have noted is a problem with Function as a Service (FaaS). What is a cold start? “When your function receives no traffic, its number of function instances are scaled to 0. But then when traffic arrives, Cloud Functions will scale from 0 to 1,” explains Giannini. The time it takes for the function to start is what is called a cold start.

With min instances, users can “simply define a value that Cloud Functions will scale down to, keeping one or more instances warm, so that when the next request comes in, there are already some warm instances to process them,” said Giannini. When not in use, these ‘warm’ instances are charged for memory, and for CPU at 10% of the price.

To use min instances, you can use either a command-line flag or the user interface:

$ gcloud beta functions deploy my-function --min-instances 1

Google Cloud Run is a product which enables users to develop and deploy highly scalable containerized applications on a fully managed serverless platform. Giannini outlined the improvements made to the Cloud Run developer experience in 2021, focused on three areas: local development, easier deployment from local source, and greater observability.

You can run your Cloud Run service in a local emulator, which is available in the gcloud command line, in Cloud Code for VS Code and IntelliJ, as well as the Cloud Shell Editor IDE.

The simple command to start the local development environment is:

$ gcloud beta code dev

“You start a local environment that will emulate the characteristics of Cloud Run with the CPU, memory allocation, [and] environment variables that you have defined,” explained Giannini. “It will watch your local source code for changes. When they happen, we will rebuild them into a container and restart the local server. It is quite handy for a fast development loop, instead of deploying to the cloud for testing your changes.”

Once you have developed your service, you will want to deploy it. Giannini noted the added support for deploying source code to Cloud Run with:

$ gcloud run deploy
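
The source deploy can also be made fully non-interactive. A sketch, with a hypothetical service name and region:

```shell
# Build and deploy the current directory's source to Cloud Run.
# --source triggers a container build from local source; the service
# name and region here are hypothetical.
gcloud run deploy my-service --source . --region us-central1
```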

On the observability side, an instance count metric has been added, which simply shows the number of active and idle instances for a given Cloud Run service. But the most interesting addition is out-of-the-box error reporting. “So without any more configuration, that means that if your Cloud Run container instance is out of memory, or if Cloud Run is not able to scale because it is reaching its maximum instance limit, those errors, which are present in your logs, will now also be aggregated in cloud error reporting and displayed in a very actionable way in the Cloud Run user interface,” said Giannini.

Looking to the wider product suite, Giannini added that Eventarc and Workflows were two products which pair very well with Cloud Functions and Cloud Run. Eventarc is an eventing product which enables users to asynchronously deliver events from Google services, SaaS, and their own apps to serverless products. Workflows is a product which looks to orchestrate and automate Google Cloud and HTTP-based API services with a workflow that the user defines.
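
As a sketch of how the products pair, an Eventarc trigger routing Cloud Storage audit-log events to a Cloud Run service might look like this (the trigger, service, and service-account names are hypothetical):

```shell
# Route storage.objects.create audit-log events to a Cloud Run service.
gcloud eventarc triggers create my-trigger \
  --destination-run-service=my-service \
  --event-filters="type=google.cloud.audit.log.v1.written" \
  --event-filters="serviceName=storage.googleapis.com" \
  --event-filters="methodName=storage.objects.create" \
  --service-account=trigger-sa@my-project.iam.gserviceaccount.com
```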


Google Cloud Next: Papa John’s Case Study: A Slice of the Action

At Google Cloud Next this week Sarika Attal (SA), VP enterprise architecture and technology services at Papa John’s, spoke with Sanjay Singh (SS), SVP cloud native services and global head of Google Ecosystem Business Unit, HCL, on the pizza brand’s data centre transformation (DCT) journey.

Below are edited remarks:

SS: What made you want to go on the cloud journey and redesign the architecture of your entire organization?

SA: If you look at Papa John’s from a technology perspective… 70% of our business is driven by digital experiences, and we are looking at multiple aspects of the other areas of the business, and how we can transform those as well. When we talk about infrastructure, it’s like the foundation of your home. It’s the foundation of all the digital aspects of every part of technology that we support.

We had a ton of PaaS workloads with Google, starting a few years ago. I’m very proud to share that we have a nice set in cloud-native, data analytics infrastructure. It has made us very agile, [created] a customer-centric data-driven mindset. There is a different pace we can move with these foundational workloads in Google Cloud.

We realised that there is a lot of diversity in terms of the technologies we use. It’s not just the tools and technologies but also the processes, the way we do things on-prem versus cloud. That heterogeneity was slowing us down. We took this approach to do a lift and shift with what we have in the data centre, and think about it as helping us get faster on all the tracks of the transformation. We feel that the infrastructure layer and particularly the things running in the data centre today have been adding a lot of business value to Papa John’s. How do we take it to the next level by really modernising the foundation and taking it to the cloud?

SS: What are the various solution components and approaches that HCL and Google Cloud helped you put together?

SA: I believe in looking at cloud as how, and not necessarily where. That’s the cloud-native mindset that we have here. In a similar fashion, this decision actually helped us formalise our cloud strategy. It forced us to think and decide various components. It also helped us clean our asset inventory; we were fortunate enough to be able to decommission a ton of stuff, as we knew there was no point taking these apps to the cloud.

It also brought a lot of goodness in terms of making the core infrastructure components as part of our architecture governance process. We were really good at application architectures, data architectures, but for some reason the infrastructure side of things were not folded in as much. This DCT helped us tighten that.

SS: How did you drive a culture of innovation?

SA: We strongly believe at Papa John’s that technology innovation does not always happen at the top, or in a particular layer at an organization. We feel it happens at a ground level, the people doing the work, they’re closest to the work. Same with our partners – we believe in innovation, ideas coming from everywhere.

We have several cadences set up and a platform where we allow everyone to contribute their ideas, and actually help them prioritise in the portfolio if it’s worth it with a very cross-functional and collaborative manner. Everyone can understand the benefits of an idea and know that there is value in implementing something.

With this type of a transformation project, it impacts a lot of teams, and a broad set of apps. So it’s important to have that sponsorship from the top. Typically, we are a very collaborative organisation, it’s not necessary to have that top-down approach on everything, but with this being such a transformation journey, you want to make sure that conversations are being had at every layer of the organization and people are prepared for the complex change that you’re driving.

You can watch the full session here.


Secret Knowledge & Hidden Gems!

A cool selection of recent (or recently updated) cloud repositories and tools across vendors and languages. Got a tip or are you working on a project you want the world to know about? Email the editor today!

BRAND NEW! m3o: The open source public cloud platform. An AWS alternative for the next generation of developers. Latest version: v0.1.0 (Oct 15). Primary language: Go (100%).

golang-samples: Sample apps and code written for Google Cloud in the Go programming language. Primary language: Go (unsurprisingly) (97.3%)

localstack: A fully functional local AWS cloud stack. Develop and test your cloud and serverless apps offline! Latest version: v0.12.18 (Sep 23). Primary language: Python (98.4%).

monokle: Your friendly desktop UI for managing K8s manifests. Quickly get a high level view of your manifests, their contained resources, and relationships. Latest version: v1.2.2 (Oct 8). Primary language: TypeScript (97.6%)

opta: Infrastructure as code where you work with high-level constructs instead of getting lost in low-level cloud configuration. Latest version: v0.15.0 (Oct 11). Primary language: Python (75.3%)
