For ticket holders

If you have purchased a ticket, you can start building your personal schedule, familiarise yourself with the venue and see who else is attending the event right away. The password to unlock this feature can be found in the ticket confirmation email.

Create your schedule and see who else is attending

Program


Registration

45min

Training session TBA

210min

Training session TBA

210min

Lunch

60min

Training session TBA

210min

Training session TBA

210min

Training session TBA

210min

Training Session

210min

After event mingle

90min


Registration

60min

Welcome

15min

Keynote

30min

Keynote

30min

Coffee break

30min

Keynote

30min

Keynote

30min

Lunch

60min

Deployment tools for OpenStack - a sighting

Deployment frameworks are an integral part of OpenStack. There is a great variety of tools and frameworks, so a talk with an overview would be of value for everyone considering a rollout of an OpenStack cloud.

Since the beginnings of OpenStack, deployment tools and frameworks have been around. An environment as complex as OpenStack is not something to install, configure and operate manually. Over the years, many different toolsets to handle these tasks have emerged. The most well-known among them include Kolla, OpenStack-Ansible, Crowbar, TripleO, conjure-up and DriveTrain. These frameworks differ in the underlying tools used for bare metal deployment (MAAS and Sledgehammer), for configuration management (Puppet, Ansible or Chef), and in user interface, from command line to WebUI. TripleO even uses OpenStack to deploy OpenStack. Some deploy OpenStack directly on the machines, others use Docker or LXD containers. To complicate things even further, do not forget to consider the following when choosing your deployment framework: Is enterprise support available? Is it backed by an active developer community, and are more than one or two companies contributing to its core development?

40min

See how SUSE enables service delivery via OpenStack, Kubernetes managed containers and Cloud Foundry

SUSE is working with these leading projects to enable the rapid deployment of dynamic services through software definition. Most of the session will be a practical demonstration of the approaches available for using these three major technologies and of how SUSE has simplified the use of this tooling.

SUSE has built a technology stack with OpenStack at its core, which allows organisations to rapidly build private clouds and then deliver services from them based on VMs, Kubernetes-managed application containers and bare metal, through the use of Software Defined Infrastructure. This session will show how these technologies combine with SUSE tooling, allowing our customers to concentrate on building the services from their clouds rather than building the clouds themselves.

40min

A DevOps State of Mind: Continuous Security with Kubernetes

Is your organization ready to address the security risks with containers for your DevOps environment? In this presentation, you’ll learn about best practices for the top security risks in your container environment and how to automate and integrate security in your DevOps CI/CD pipeline.

With the rise of DevOps, containers are at the brink of becoming a pervasive technology in Enterprise IT to accelerate application delivery for the business. When it comes to adopting containers in the enterprise, Security is the highest adoption barrier. Is your organization ready to address the security risks with containers for your DevOps environment?  In this presentation, you’ll learn about:

  • Best practices for addressing the top container security risks in a container environment including images, builds, registry, deployment, hosts, network, storage, APIs, monitoring & logging, and federation.
  • Automating and integrating security vulnerability management & compliance checking for container images in a DevOps CI/CD pipeline
  • Strategies for deploying container security updates, including recreate, rolling, blue/green, canary and A/B testing.
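
The update strategies above differ mainly in how traffic is shifted to the new version. As a rough illustration (a minimal sketch, not tied to any particular orchestrator), a canary rollout can be thought of as weighted routing of requests between the stable and candidate releases:

```python
import random

def choose_version(canary_weight):
    """Pick a backend version for one request.

    canary_weight is the fraction of traffic (0.0-1.0) sent to the
    new 'canary' release; the rest goes to the stable release.
    """
    return "canary" if random.random() < canary_weight else "stable"

# Simulate 10,000 requests with 5% canary traffic.
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[choose_version(0.05)] += 1

print(counts)  # roughly {'stable': 9500, 'canary': 500}
```

If the canary's error rate stays acceptable, the weight is ratcheted up until it reaches 1.0; otherwise traffic is shifted back to stable.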
40min

Improve performance and security for containers using Kuryr and Cilium

Containers in OpenStack are getting more traction every day, but there are still concerns about security and performance. In this talk we will describe how to overcome those problems by combining the flexibility of SDN with the reliability and speed of the Linux kernel.

Cilium is an open source project which implements Kubernetes network policies and provides container network security by using eBPF and XDP packet filtering in the Linux kernel. Kuryr is the OpenStack project that enables native Neutron-based networking in Kubernetes. In this talk we will describe the work that we’ve done to provide Cilium as a CNI plugin and how we used Kuryr to integrate it into OpenStack. We will demonstrate how to deploy and configure a Kubernetes cluster using the Cilium-Kuryr integration. We will explain how Cilium provides L7 network policies and its “native routing” mode, where it simply allows any routing daemon to route the traffic. We will illustrate Cilium’s features using concrete examples. Because native packet filtering boosts performance, we will show test results measuring how Cilium improves throughput compared to other CNI plugins.
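
To illustrate the idea behind an L7 network policy (a deliberately simplified sketch, not Cilium's actual eBPF implementation), the point is that instead of opening a whole port, only specific HTTP requests are allowed through:

```python
# Hypothetical allow rules: (HTTP method, path) pairs that may pass.
ALLOWED = [
    ("GET", "/healthz"),
    ("GET", "/api/v1/pods"),
]

def is_allowed(method, path):
    """Return True if the request matches an allow rule, else drop it."""
    return (method.upper(), path) in ALLOWED

print(is_allowed("GET", "/healthz"))       # True
print(is_allowed("POST", "/api/v1/pods"))  # False: same path, wrong method
```

Cilium enforces rules of this kind in the kernel's datapath, which is what makes L7 filtering feasible at line rate.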

40min

Mastering Edge Cloud with StarlingX and Akraino

In this presentation, we’ll discuss how the OpenStack StarlingX project addresses current challenges, using code that is also contributed to the open source Akraino Edge Stack project launched by the Linux Foundation.

In the telecom market, communications service providers worldwide are increasingly viewing applications hosted at the network edge as compelling business opportunities. Some examples of those applications and functions are Multi-access Edge Computing, universal CPE, virtual CPE and virtual RAN.

In parallel, the industrial market is now undergoing a digital transformation termed “Industry 4.0“. Industrial IOT applications represent new business opportunities, with new kinds of services (asset monitoring, analytics, business processes, etc.) delivered to new categories of customers (manufacturing facilities, car dealers, city governments, hospitals, etc.).

Many of these application and services are required to be hosted at the network edge, either to enable ultra-low latency connectivity (process control), to perform on-premise analytics (patient monitoring), or to minimize backhaul traffic (video surveillance). Edge compute solutions are therefore a key requirement as companies exploit business opportunities either in the telecom market or in Industrial IOT.

In this presentation, we’ll discuss how the OpenStack StarlingX project addresses those challenges, using code that is also contributed to the open source Akraino Edge Stack project launched by the Linux Foundation. We’ll explain how Wind River’s contributions to these two projects will help telecom and industrial companies to streamline the installation, commissioning, and maintenance of their edge clouds, thereby minimizing their operational costs and reducing their schedule risk.

40min

Multicloud CI/CD with OpenStack and Kubernetes

In this session we’ll discuss and demo how to leverage OpenStack to build multicloud Federated Kubernetes clusters across several OpenStack Public Clouds.

With over 50 Public cloud regions in the OpenStack Passport program, distributing workloads closer to customers is now a reality.

Introduction
We’ll start by presenting the benefits of multi cloud and how Kubernetes and OpenStack fit in that strategy.

Multicloud architecture
Then we’ll explain the setup of a geo-distributed Kubernetes environment and how this translates into OpenStack terms.

Cloud agnostic tools
An important part of multicloud is having adapted tooling, so we’ll compare several tools to manage OpenStack resources (Heat, Ansible, Terraform, …) and to install Kubernetes (kops, kubespray, kubeadm, …) in a cloud-agnostic way.

Demo
We’ll put it all to the test with a live demo of a CI/CD application deployment across the globe.

40min

Coffee break

15min

PostgreSQL provisioning and management with OpenStack Trove

Outline of Presentation:

  1. Introduction to OpenStack Trove
  2. Trove database agnostic architecture
  3. PostgreSQL feature matrix
  4. Our contribution to PostgreSQL driver
  5. Live demo
  6. Questions

Introduction to OpenStack Trove

OpenStack Trove is a Database as a Service platform which simplifies life cycle management of database technologies. It is comparable to AWS RDS and Google’s Cloud SQL. Trove is a database-agnostic platform built on a common set of design principles for every database. Through Trove, DB instances can be provisioned and managed as first-class OpenStack resources. You can implement both SQL and NoSQL databases through Trove. As of now, the following databases are supported:

  • SQL: MySQL, MariaDB, PostgreSQL, Vertica, DB2
  • NoSQL: MongoDB, Cassandra, Redis, Couchbase

Trove Database Agnostic Architecture

Trove is designed to support a single-tenant database within a Nova instance. Trove interacts with all other OpenStack components, such as Nova, Neutron, Glance, Swift and Cinder, purely through their APIs. Trove currently comprises the following major components:

  1. API Server
  2. Task Manager
  3. Guest Agent
  4. Conductor

API Server

  • Trove API server provides a RESTful API that supports JSON and XML to provision and manage Trove instances.
  • API Server communicates to the Task Manager to handle complex, asynchronous tasks and it will also talk directly to the guest agent to handle simple tasks such as retrieving a list of DB users. Its main job is to take requests, turn them into messages, validate them, and forward them on to the Task Manager or Guest Agent.

Task Manager

  • Task Manager service provisions instances, manages the lifecycle of DB instances, and performs operations such as resizing and backup on the instance.
  • It takes messages from the API Server, and acts according to them. A few complex tasks, for example, are resize database flavor and create instance. They both require HTTP calls to OpenStack services, as well as polling those services until the instance becomes active, and also sending messages to the Guest Agent. The Task Manager handles the flow of processes as they occur across multiple, distributed systems.

Guest Agent

  • Guest Agent is a service that runs within the guest DB instance and responsible for performing operations on the datastore itself.
  • It is in charge of bringing a datastore online. The Guest Agent also sends heartbeat messages to the API via conductor.
  • Each datastore implementation has a Guest Agent implementation in charge of doing specific tasks for that datastore. For instance, a Redis guest agent will behave in different ways than a PostgreSQL guest.

Conductor

  • Conductor is responsible for receiving messages from guest instances to update information on the host.
  • Conductor listens for RPC messages through the message bus and performs the relevant operation.
  • Conductor is similar to guest-agent in that it is a service that listens to a RabbitMQ topic. The difference is conductor lives on the host, not the guest. Guest agents communicate to conductor by putting messages on the topic defined in config as conductor_queue. By default this is “trove-conductor”.
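
The guest-agent-to-conductor flow described above can be sketched with an in-process queue standing in for the RabbitMQ topic (the function names and message fields here are illustrative, not Trove's exact wire format):

```python
import queue

# In-process stand-in for the message bus topic (by default "trove-conductor").
conductor_queue = queue.Queue()

def guest_heartbeat(instance_id):
    """Guest-agent side: publish a heartbeat message onto the topic."""
    conductor_queue.put({"method": "heartbeat", "instance_id": instance_id})

def conductor_drain():
    """Conductor side: consume pending messages and record status on the host."""
    statuses = []
    while not conductor_queue.empty():
        msg = conductor_queue.get()
        if msg["method"] == "heartbeat":
            statuses.append((msg["instance_id"], "ACTIVE"))
    return statuses

guest_heartbeat("pg-instance-1")
guest_heartbeat("pg-instance-2")
statuses = conductor_drain()
print(statuses)  # [('pg-instance-1', 'ACTIVE'), ('pg-instance-2', 'ACTIVE')]
```

The key design point survives the simplification: guests never touch the host database directly; they only put messages on a topic that the conductor consumes.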

PostgreSQL feature matrix and its implementation

  • Support for PostgreSQL 9.4, 9.6 and 10.4.
  • Custom database instance types with Flavors.
  • Able to automatically increase storage size as needed.
  • Support for secure external connections with the SSL/TLS protocol.
  • Automated and on-demand filesystem level backups.
  • Able to do full and incremental backups and restore them.
  • Create multiple read replicas and promote them.
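
The automatic storage growth mentioned above boils down to a threshold decision. A minimal sketch (the threshold, growth factor and cap are illustrative assumptions, not Trove's actual parameters):

```python
def next_volume_size(size_gb, used_gb, threshold=0.8, growth=1.5, max_gb=500):
    """Return the new volume size, growing it once usage crosses the threshold."""
    if used_gb / size_gb < threshold:
        return size_gb                       # still enough headroom
    return min(int(size_gb * growth), max_gb)

print(next_volume_size(100, 50))   # 100 (50% used, no change)
print(next_volume_size(100, 85))   # 150 (85% > 80%, grow by 1.5x)
print(next_volume_size(400, 390))  # 500 (capped at max_gb)
```

In practice the "grow" branch would trigger a Cinder volume extend followed by a filesystem resize inside the guest.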

Our contributions to PostgreSQL driver

  • The upstream PostgreSQL driver only supports versions up to 9.4. We extended driver support to 9.6 and 10.4.
  • We have introduced an automatic backup facility into Trove to back up DB instances automatically and save the backups in Swift storage for a specified time.

Live Demo

In this live demo I will show the audience the following:

  • Creating PostgreSQL 9.6 and 10.4 instances.
  • Resizing the volumes
  • Tuning database
  • Backup, restore and Automatic Backup
  • Creating Read replicas

Questions

40min

OpenStack - the good, the bad and what's missing?

What’s the best with OpenStack? What are things that are not as good as they need to be? And, what is completely missing in OpenStack for you? Operators and cloud users - your feedback is highly valued and this is a good chance for you to provide your feedback to the OpenStack community!

OpenStack has been around for quite some time now and is considered by many to be the obvious choice when setting up a private cloud. What we often seem to forget is that OpenStack is also growing as one of the prominent choices for public clouds around the globe, and that it, in many cases, is the perfect solution for hybrid cloud usage thanks to being open source and avoiding vendor lock-in.

Core developers of the OpenStack projects as well as decision makers within the community are eager to get feedback from operators of OpenStack clouds and end users consuming OpenStack powered services.

What are the biggest headaches for operators?
Which features - big or small - are missing according to consumers?

This is a good opportunity for you to share your feedback with the OpenStack community. This is NOT a regular presentation where I tell you what is good, bad or missing in OpenStack - this is a “Forum session” - a session where we discuss and collaborate, with the goal of collecting useful feedback that will be fed into the right channels within the OpenStack community.

In the spirit of community, and as Chair of the OpenStack Public Cloud Working Group, I invite you to join this discussion and help OpenStack evolve in the right direction!

40min

The road to OpenStack at the UK's largest academic private cloud: University of Edinburgh

The talk will reveal why the University of Edinburgh decided to implement an OpenStack-based private cloud system, what the benefits of adopting a self-service computing environment are, and how it was possible for the University to flexibly scale up to 200+ hypervisor servers.

This presentation covers the UK’s largest academic private cloud in production: the University of Edinburgh’s OpenStack system.

The biggest challenge we faced was to be able to provide the University’s researchers with more flexible self-served computing services. The OpenStack-based private cloud solution proposed by Sardina Systems was operational within 8 weeks from commencement of design process!

In the presentation we plan to cover the following for our audience:

  • the predefined constraints that came from the university’s side
  • the solution we proposed to the university, encompassing all 3 phases of OpenStack: deploy, operate and upgrade
  • the benefits of an OpenStack-based private cloud system in the academic sector.

The University of Edinburgh is one of the leading universities in the UK, ranked 19th in the world, 6th in Europe and 4th in the UK, and has decided to implement an OpenStack-based private cloud system for its research services.

40min

Upgrading the OpenStack and OS engines mid-flight: skipping releases without crashing the plane

So you want to upgrade your production cloud, but you would like to skip releases. While doing that you might also need to upgrade the operating system of all your cloud nodes. And of course, you don’t want to disrupt running workloads in the process. Are you asking for too much?

In this talk I want to share the lessons we learned while developing an automated upgrade solution that achieves all these requirements.  We will talk about necessary preparation and prerequisites for reaching the goal, such as clear SLAs, effective backups, staged testing, and an HA architecture. The talk will address challenges around component and protocol version compatibilities, orchestrating many complex steps such as package upgrades, reboots, DB migrations in the right order, and various other problems that we’ve encountered on the road to this solution.
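
Ordering many interdependent steps correctly is the heart of such an upgrade. As a rough sketch (the step names and their dependencies below are hypothetical, not the speakers' actual procedure), the problem can be modeled as a topological sort over a dependency graph:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph of upgrade steps: each step maps to the
# steps that must complete before it can run.
steps = {
    "backup_db":        [],
    "upgrade_packages": ["backup_db"],
    "migrate_db":       ["upgrade_packages"],
    "restart_services": ["migrate_db"],
    "reboot_node":      ["restart_services"],
}

order = list(TopologicalSorter(steps).static_order())
print(order)
# ['backup_db', 'upgrade_packages', 'migrate_db', 'restart_services', 'reboot_node']
```

Modeling the plan this way also catches accidental cycles (the sorter raises an error), which is exactly the kind of mistake that is hard to spot in a hand-written runbook.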

40min

The Little Bag O'Tricks: 10 things you might not know you can do with OpenStack

OpenStack is widespread, popular, and well-documented — and yet, it has many features that are exceedingly useful but little-known. For this talk, I’ve picked 10 things about OpenStack that you might not know yet, though you might wish you did.

OpenStack is a well-established Infrastructure-as-a-Service (IaaS) platform and is used in many public and private clouds for a large array of purposes. And yet, OpenStack is often underrated in that it has many very helpful features that are woefully under-used. Some of that is due to the fact that several of these features are simply absent from competing IaaS platforms. In this talk, I am sharing ten things that are available in all or many OpenStack environments that even experienced OpenStack users or operators might not know about:

  • Understanding virtual machine suspension, and using it to great cost-savings effect in OpenStack public clouds
  • Using snapshotting and rollback for complete, arbitrarily complex virtual environments managed by Heat
  • Rapidly converting images in Glance, without downloading and re-uploading them
  • Understanding what you can and can’t do with nested KVM virtualization (running your own VMs inside your Nova VMs)
  • Using the Cinder image-volume cache
  • Ensuring rapid, sub-minute VM spin-up in Ceph-backed OpenStack environments, regardless of image or snapshot size
  • Making the best use of availability zones across Nova, Cinder, and Neutron
  • Facilitating virtual machine configuration with config-drive, Nova user-data, cloud-init, and #cloud-config
  • Helping public cloud users save money by providing them with virtual machines that auto-expire
  • Finding out which OpenStack version and platform you are running on in a public cloud, from within a virtual machine
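
On the last point: a VM can typically fetch its own metadata from the Nova metadata service at http://169.254.169.254/openstack/latest/meta_data.json. A minimal sketch of parsing such a document offline (the field values shown are illustrative):

```python
import json

# Sample of the kind of JSON document a VM can fetch from the Nova
# metadata service; values here are made up for illustration.
sample = json.dumps({
    "uuid": "8c1f...",
    "name": "my-vm",
    "availability_zone": "nova",
})

meta = json.loads(sample)
print(meta["availability_zone"])  # nova
print(meta["name"])               # my-vm
```

Inside a real guest, the `sample` string would instead come from an HTTP GET against the 169.254.169.254 link-local address.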
40min

Self-driving data centers with OpenStack

Today’s AI hype promises smart cities and self-driving cars, but little attention is given to the data centers that will power this new world. This talk argues why self-driving data centers are needed, discusses how to build them, and exemplifies with intelligent OpenStack autoscaling.

The DevOps movement has brought great improvements to the speed and scale at which cloud services are developed and operated. Thanks to automation and a maturing OpenStack, commonly combined with a container orchestration platform, today’s data center operators can handle increasingly large infrastructures and more frequent deployments. However, despite automation, there are still many tasks that are hard, error-prone, and/or frustrating.

Example challenges and questions include: how much CPU and RAM should I specify for a VM? By how much should I scale up our infrastructure for the next big event or sale? What is causing my application to run slowly, and how do I fix it? Why does our service always go down during my on-call hours, and in the middle of the night?

To answer these questions, automation is not enough; intelligence is needed. This talk argues that through artificial intelligence, data centers can automatically resolve these questions and thus greatly relieve operators. Similar ideas are being explored extensively in self-driving vehicles, AI chatbots, and fraud detection. This talk aims to describe the main principles behind designing self-configuring, self-healing, and self-optimizing data center capabilities to the OpenStack community. We discuss our experiences with tooling, focusing on monitoring, storing, and analyzing the operational data needed for intelligent decision making. We also describe our lessons learned on how to build robust and production-ready self-management systems, and report on the pitfalls we encountered. The overall concepts of the talk will be illustrated with a demo where predictive analytics are used for proactive autoscaling in a Kubernetes and OpenStack context.
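
The proactive-autoscaling idea reduces to two steps: forecast the load, then size capacity to meet it. A deliberately naive sketch (the linear forecast and the per-replica capacity figure are assumptions for illustration, not the talk's actual model):

```python
import math

def predict_next(loads):
    """Naive forecast: linear extrapolation from the last two samples."""
    return loads[-1] + (loads[-1] - loads[-2])

def replicas_needed(predicted_rps, rps_per_replica=100.0, min_replicas=1):
    """Scale proactively so capacity meets the predicted load."""
    return max(min_replicas, math.ceil(predicted_rps / rps_per_replica))

recent = [220.0, 260.0, 310.0]    # requests/sec over recent intervals
forecast = predict_next(recent)   # 360.0: the upward trend continues
print(replicas_needed(forecast))  # 4 replicas to cover ~360 rps
```

A production system would replace the two-point extrapolation with a proper time-series model, but the shape of the decision, predict first and scale before the load arrives, stays the same.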

40min

After event mingle

90min