How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of that new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open-sourced five years ago.

Today, Kubernetes is one of the fastest-growing open-source projects, and earlier this month, the KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who went on to found his own startup, Heptio, which he later sold to VMware); Tim Hockin, another Googler who was an early member of the project and also worked on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, sold it to Microsoft and is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

AWS launches an undo feature for its Aurora database service

Aurora, AWS’s managed MySQL and PostgreSQL database service, is getting an undo feature. As the company announced today, the new Aurora Backtrack feature will allow developers to “turn back time.” For now, this only works for MySQL databases, though. Developers have to opt in to this feature and it only works for newly created database clusters or clusters that have been restored from backup.

The service does this by keeping a log of all transactions for a set amount of time (up to 72 hours). When things go bad after you’ve dropped the wrong table in your production database, you simply pause your application and select the point in time you want to go back to. Aurora then pauses the database as well, closes all open connections and drops anything that hasn’t been committed yet, before rolling back to its state before the error occurred.

Being able to reverse transactions isn’t completely new, of course. Many database systems have implemented some version of this already, including MySQL, though those implementations are often far more limited in scope than what AWS announced today.

As AWS Chief Evangelist Jeff Barr notes in today’s announcement, disaster recovery isn’t the only use case here. “I’m sure you can think of some creative and non-obvious use cases for this cool new feature,” he writes. “For example, you could use it to restore a test database after running a test that makes changes to the database. You can initiate the restoration from the API or the CLI, making it easy to integrate into your existing test framework.”
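For developers who want to wire this into a test harness, here is a minimal sketch of what that API call can look like from Python using boto3; the cluster identifier and timestamp are placeholders, and it assumes Backtrack has already been enabled on the cluster:

```python
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")

# Roll the Aurora MySQL cluster back to a point in time within the
# configured backtrack window (identifier and timestamp are placeholders).
response = rds.backtrack_db_cluster(
    DBClusterIdentifier="my-test-cluster",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=30),
    UseEarliestTimeOnPointInTimeUnavailable=True,
)

print(response["Status"])
```

A test suite could run a call like this after each destructive test to reset the cluster to its pre-test state, which is exactly the kind of integration Barr describes.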

Aurora Backtrack is now available to all developers. It will cost about $0.012 per one million change records for databases hosted in the company’s U.S. regions, with slightly higher prices in Europe and Asia.

Fantasmo is a decentralized map for robots and augmented reality

“Whether for AR or robots, anytime you have software interacting with the world, it needs a 3D model of the globe. We think that map will look a lot more like the decentralized internet than a version of Apple Maps or Google Maps.” That’s the idea behind new startup Fantasmo, according to co-founder Jameson Detweiler. Coming out of stealth today, Fantasmo wants to let any developer contribute to and draw from a sub-centimeter accuracy map for robot navigation or anchoring AR experiences.

Fantasmo plans to launch a free Camera Positioning Standard (CPS) that developers can use to collect and organize 3D mapping data. The startup will charge for commercial access and premium features in its TerraOS, an open-sourced operating system that helps property owners keep their maps up to date and supply them for use by robots, AR and other software equipped with Fantasmo’s SDK.

With $2 million in funding led by TenOneTen Ventures, Fantasmo is now accepting developers and property owners to its private beta.

Directly competing with Google’s own Visual Positioning System is an audacious move. Fantasmo is betting that private property owners won’t want big corporations snooping around to map their indoor spaces, and instead will want to retain control of this data so they can dictate how it’s used. With Fantasmo, they’ll be able to map spaces themselves and choose where robots can roam or if the next Pokémon GO can be played there.

“Only Apple, Google, and HERE Maps want this centralized. If this data sits on one of the big tech company’s servers, they could basically spy on anyone at any time,” says Detweiler. The prospect gets scarier when you imagine everyone wearing camera-equipped AR glasses in the future. “The AR cloud on a central server is Big Brother. It’s the end of privacy.”

Detweiler and his co-founder Dr. Ryan Measel first had the spark for Fantasmo as best friends at Drexel University. “We need to build Pokémon in real life! That was the genesis of the company,” says Detweiler. In the meantime he founded and sold LaunchRock, a 500 Startups company for creating “Coming Soon” sign-up pages for internet services.

After Measel finished his PhD, the pair started Fantasmo Studios to build augmented reality games like Trash Collectors From Space, which they took through the Techstars accelerator in 2015. “Trash Collectors was the first time we actually created a spatial map and used that to sync multiple people’s precise position up,” says Detweiler. But while building the infrastructure tools to power the game, they realized there was a much bigger opportunity to build the underlying maps for everyone’s games. Now the Santa Monica-based Fantasmo has 11 employees.

“It’s the internet of the real world,” says Detweiler. Fantasmo now collects geo-referenced photos, scans them for identifying features like walls and objects, and imports them into its point cloud model. Apps and robots equipped with the Fantasmo SDK can then pull in the spatial map for a specific location that’s more accurate than federally run GPS. That lets them peg AR objects to precise spots in your environment while making sure robots don’t run into things.

Fantasmo identifies objects in geo-referenced photos to build a 3D model of the world
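To make that pipeline a bit more concrete, here is a purely hypothetical Python sketch of how geo-referenced features could be stored and queried by location. This is an illustration only, not Fantasmo’s actual SDK; every name and data structure below is an assumption.

```python
from dataclasses import dataclass
from math import cos, radians, sqrt


@dataclass
class FeaturePoint:
    """A single mapped feature extracted from a geo-referenced photo."""
    lat: float         # degrees
    lng: float         # degrees
    alt_m: float       # meters above a reference surface
    descriptor: bytes  # visual descriptor used to re-recognize the feature


class SpatialMap:
    """Toy in-memory stand-in for a shared, queryable point-cloud map."""

    def __init__(self) -> None:
        self.points: list[FeaturePoint] = []

    def add_photo_features(self, features: list[FeaturePoint]) -> None:
        # In a real pipeline the photo would be scanned for walls and objects,
        # and each detection converted into a geo-referenced point.
        self.points.extend(features)

    def query(self, lat: float, lng: float, radius_m: float) -> list[FeaturePoint]:
        # Return every mapped point within radius_m of the query location,
        # using a small-area flat-earth approximation of distance.
        def dist_m(p: FeaturePoint) -> float:
            dy = (p.lat - lat) * 111_320.0
            dx = (p.lng - lng) * 111_320.0 * cos(radians(lat))
            return sqrt(dx * dx + dy * dy)

        return [p for p in self.points if dist_m(p) <= radius_m]
```

In this toy version, an AR app or robot would query the map for its immediate surroundings and localize against the returned points rather than relying on GPS alone.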

“I think this is the most important piece of infrastructure to be built during the next decade,” Detweiler declares. That potential attracted funding from TenOneTen, Freestyle Capital, LDV, NoName Ventures, Locke Mountain Ventures and some angel investors. But it’s also attracted competitors like Escher Reality, which was acquired by Pokémon GO parent company Niantic, and Ubiquity6, which has investment from top-tier VCs like Kleiner Perkins and First Round.

Google is the biggest threat, though, with its industry-leading traditional Google Maps, its experience with indoor mapping through Tango, its new VPS initiative and near-limitless resources. Just yesterday, Google showed off an AR fox in Google Maps that you can follow for walking directions.

Fantasmo is hoping that Google’s size works against it. The startup sees a path to victory through interoperability and privacy. The big corporations want to control maps and give their own platforms preferential access while owning the data about private property. Fantasmo wants to empower property owners to oversee that data and decide what happens to it. Measel concludes, “The world would be worse off if GPS was proprietary. The next evolution shouldn’t be any different.”

Targetprocess lands Series A 14 years after launching

Targetprocess launched in 2004 in Minsk, Belarus with a mission of making it simpler to manage agile-driven programming projects. It announced it has taken its first funding in its 14-year history, a $5 million Series A led by the European Bank for Reconstruction and Development and Zubr Capital, a private equity firm in Minsk.

Why take money after all these years? It’s a long journey from 2004 to now, but Andrey Mihailenko, co-founder of Targetprocess, says the time is simply right to take on more money to expand its market vision. “The goal of taking on this funding is to get bigger. We see the opportunity right now because more companies understand the value of agile to provide faster response to change agents and quicker delivery,” he said.

He said the founders often debated over the years when to take on external investment, but decided to wait until they felt it was the right time to expand. “We delayed because venture capital is not just about money, but giving up some control and having someone else influence some of the decisions. We wanted our vision fulfilled and now seems like a perfect time because agile is [moving beyond] IT into other parts of the organization,” Mihailenko explained.

Like many startups, this one was born out of necessity when one of Mihailenko’s co-founders became fascinated with the agile programming methodology. When he couldn’t find tools to adequately manage the process, he decided to build them, and from that early work Targetprocess was born.

Today, he says, his company focuses on agile teams of all sizes, as the agile concept has gone mainstream over time and become an accepted practice. “Our focus is on providing a platform to enable agile teams to visualize the workflow, how they work and making sure their priorities align and that they work in an agile way,” Mihailenko said.

Their persistence appears to have paid off. From its five co-founders, the company has grown to 115 employees with over 1,000 clients worldwide, according to Mihailenko. The development team remains in Minsk, but the company has small offices in Buffalo, NY, London and Berlin.

They plan to use the money to push into new markets by hiring new sales and marketing professionals who can help them expand. They also intend to enhance the R&D team in Minsk and expect to reach 160 employees in the next 12 to 18 months.

8 big announcements from Google I/O 2018

Google kicked off its annual I/O developer conference at Shoreline Amphitheater in Mountain View, California. Here are some of the biggest announcements from the Day 1 keynote. There will be more to come over the next couple of days, so follow along on everything Google I/O on TechCrunch. 

Google goes all in on artificial intelligence, rebranding its research division to Google AI

Just before the keynote, Google announced it is rebranding its Google Research division to Google AI. The move signals how Google has increasingly focused R&D on computer vision, natural language processing, and neural networks.

Google makes talking to the Assistant more natural with “continued conversation”

What Google announced: Google announced a “continued conversation” update to Google Assistant that makes talking to the Assistant feel more natural. Now, instead of having to say “Hey Google” or “OK Google” every time you want to issue a command, you’ll only have to do so the first time. The company is also adding a new feature that lets you ask multiple questions within the same request. All of this will roll out in the coming weeks.

Why it’s important: When you’re having a typical conversation, odds are you are asking follow-up questions if you didn’t get the answer you wanted. But it can be jarring to have to say “Hey Google” every single time, and it breaks the whole flow and makes the process feel pretty unnatural. If Google wants to be a significant player when it comes to voice interfaces, the actual interaction has to feel like a conversation — not just a series of queries.

Google Photos gets an AI boost

What Google announced: Google Photos already makes it easy to correct photos with built-in editing tools and AI-powered features for automatically creating collages, movies and stylized photos. Now Photos is getting more AI-powered fixes: a new version of the app will suggest quick tweaks like B&W photo colorization, brightness corrections, rotations and adding pops of color.

Why it’s important: Google is working to become a hub for all of your photos, and it’s able to woo potential users by offering powerful tools to edit, sort and modify those photos. Each additional photo gives Google more data and helps it get better and better at image recognition, which not only improves the user experience but also makes the tools behind its other services better. Google, at its heart, is a search company, and it needs a lot of data to get visual search right.

Google Assistant and YouTube are coming to Smart Displays

What Google announced: Smart Displays were the talk of Google’s CES push this year, but we haven’t heard much about Google’s Echo Show competitor since. At I/O, we got a little more insight into the company’s smart display efforts. Google’s first Smart Displays will launch in July, and of course will be powered by Google Assistant and YouTube. It’s clear that the company’s invested some resources into building a visual-first version of Assistant, justifying the addition of a screen to the experience.

Why it’s important: Users are increasingly getting accustomed to the idea of a smart device sitting in their living room that will answer their questions. But Google is looking to create a system where a user can ask a question and, for actions that just can’t be resolved with a voice interface, get some kind of visual display in response. Google Assistant handles the voice part of that equation, and YouTube is a natural service to pair with it.

Google Assistant is coming to Google Maps

What Google announced: Google Assistant is coming to Google Maps, available on iOS and Android this summer. The addition is meant to provide better recommendations to users. Google has long worked to make Maps seem more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.

The maps integration also combines the camera, computer vision technology, and Google Maps with Street View. With the camera/Maps combination, it really looks like you’ve jumped inside Street View. Google Lens can do things like identify buildings, or even dog breeds, just by pointing your camera at the object in question. It will also be able to identify text.

Why it’s important: Maps is one of Google’s biggest and most important products. There’s a lot of excitement around augmented reality — you can point to phenomena like Pokémon Go — and companies are just starting to scratch the surface of the best use cases for it. Figuring out directions seems like such a natural use case for a camera, and while it was a bit of a technical feat, it gives Google yet another perk for its Maps users to keep them inside the service and not switch over to alternatives. Again, with Google, everything comes back to the data, and it’s able to capture more data if users stick around in its apps.

Google announces a new generation for its TPU machine learning hardware

What Google announced: As the war over custom AI hardware heats up, Google said that it is rolling out the third generation of its silicon, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU pods are eight times more powerful than last year’s, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations.

Why it’s important: There’s a race to create the best machine learning tools for developers. Whether that’s at the framework level with tools like TensorFlow or PyTorch or at the actual hardware level, the company that’s able to lock developers into its ecosystem will have an advantage over its competitors. It’s especially important as Google looks to build its cloud platform, GCP, into a massive business while going up against Amazon’s AWS and Microsoft Azure. Giving developers, who are already adopting TensorFlow en masse, a way to speed up their operations can help Google continue to woo them into its ecosystem.
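For a sense of what this looks like from the developer’s side, here is a minimal sketch of how a TensorFlow program can target a Cloud TPU through the tf.distribute API in recent TensorFlow 2.x releases; the TPU address and the toy model are placeholders:

```python
import tensorflow as tf

# Resolve and initialize the TPU system (the TPU address is a placeholder).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://10.0.0.2:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across all TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # A deliberately tiny stand-in model; any Keras model works here.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit(...) then runs each training step across the TPU cores.
```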

Google CEO Sundar Pichai delivers the keynote address at the Google I/O 2018 conference at Shoreline Amphitheater on May 8, 2018, in Mountain View, California. (Photo by Justin Sullivan/Getty Images)

Google News gets an AI-powered redesign

What Google announced: Watch out, Facebook. Google is also planning to leverage AI in a revamped version of Google News. The AI-powered, redesigned news destination app will “allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.” It will leverage elements found in Google’s digital magazine app Newsstand and in YouTube, and it introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.

Why it’s important: Facebook’s main product is literally called “News Feed,” and it serves as a major source of information for a non-trivial portion of the planet. But Facebook is embroiled in a scandal over the personal data of as many as 87 million users ending up in the hands of a political research firm, and there are a lot of questions about Facebook’s algorithms and whether they surface legitimate information. That’s a huge hole that Google could exploit by offering a better news product and, once again, locking users into its ecosystem.

Google unveils ML Kit, an SDK that makes it easy to add AI smarts to iOS and Android apps

What Google announced: Google unveiled ML Kit, a new software development kit for app developers on iOS and Android that allows them to integrate pre-built, Google-provided machine learning models into apps. The models support text recognition, face detection, barcode scanning, image labeling and landmark recognition.

Why it’s important: Machine learning tools have enabled a new wave of use cases built on top of image recognition or speech detection. But even though frameworks like TensorFlow have made it easier to build applications that tap those tools, it can still take a high level of expertise to get them off the ground and running. Developers often figure out the best use cases for new tools and devices, and development kits like ML Kit help lower the barrier to entry, giving developers without a ton of machine learning expertise a playground to start figuring out interesting applications.
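ML Kit itself ships as Android and iOS SDKs rather than a Python library, but its cloud-based features (text recognition, image labeling, landmark recognition) are backed by Google’s Cloud Vision models. As a rough illustration of calling those same model categories from Python, here is a sketch using the google-cloud-vision client library; it assumes recent versions of that library are installed, Google Cloud credentials are configured, and the file path is a placeholder:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load a local photo (placeholder path) and wrap it for the API.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Text recognition: returns detected text with bounding boxes.
text = client.text_detection(image=image)
print(text.text_annotations[0].description if text.text_annotations else "no text")

# Image labeling: returns descriptive labels with confidence scores.
labels = client.label_detection(image=image)
for label in labels.label_annotations[:5]:
    print(label.description, round(label.score, 2))

# Landmark recognition: identifies well-known places in the photo.
landmarks = client.landmark_detection(image=image)
for landmark in landmarks.landmark_annotations:
    print(landmark.description)
```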

So when will you be able to actually play with all these new features? The Android P beta is available today, and you can find the upgrade here.

Google opens Instant Apps to all game developers

Instant Apps for Android have been one of Google’s most interesting technologies for mobile developers. In their earliest incarnation, Instant Apps were mostly useful for developers of relatively straightforward apps. Earlier this year, Google launched its beta of Instant Apps for games, too, which allows players to get a sense of the gameplay before actually installing the full game. Until now, this was only available to a small number of game developers, but starting today, all game developers will be able to build instant apps and showcase them in the Google Play store and anywhere else a user can tap on a link.

In today’s announcement, Google also notes that it has started testing Google Play Instant compatibility with AdWords, so that developers can direct users directly to their game after they tap on an ad. It’s unclear when exactly Google plans to roll out support for these ads, though.

The showcase app for today’s launch is Candy Crush Saga, an app that probably doesn’t need the extra promotion, thanks to its more than 500 million installs on Android already.

Regular Instant Apps have to be less than 2 MB in size. That obviously isn’t a realistic restriction for games, which have far more graphical assets to squeeze in. So for games, Google went with a 10 MB limit, and based on what I’ve seen from some of the apps already in the Google Play store, those apps still load extremely fast (to put that 10 MB limit into perspective, it’s worth remembering that many a website weighs in at significantly more than that).

Google makes its Material Design system easier to customize

Since 2014, Material Design has been Google’s design language for its apps. Now the company is greatly expanding the services around its design system, offering a set of new tools for theming and iterating on designs, as well as new open-source components that developers can incorporate into their own apps. In addition, Google is making Material Gallery, the same tool it uses to help its own designers collaborate, available to everybody.

All of these new features are now available at the redesigned Material.io site.

Google isn’t making any major changes to the overall design language, but it is making it easier for developers to adapt Material Design to their own projects, and two of today’s launches focus solely on this. The first is Material Theming, that is, the ability to make a small change to, say, a color or a typeface and have that change applied consistently across the theme.

“Theming lets anyone consistently and systematically express their unique style across a product,” the team explains. “When you make just a few decisions about color and typography, for example, it’s simple to apply the direction throughout the environment.”

Google itself is already using this system and notes that any company can now easily tweak the system to its own brand guidelines.

Tweaking these designs still takes a good bit of work, so the second new feature — the Material Theme Editor — now makes it far easier to try out new designs. It gives developers a control panel that makes it easy to apply global style changes to color, typography and shape.

One nifty feature here is that the Editor will allow you to export your own Material theme based on your designs. While the tweaks that you can apply are still a bit limited, Google says that it’ll add more customization options over time.

Right now, the Editor only works with the popular Sketch design app and you can start using it by downloading the Material plugin for Sketch.

In addition to the work on theming components, Google also launched new Icon sets for Material Design today. These new icon themes can be customized, as well, and are available in baseline, round, two-tone, sharp and outlined variations.

And while theming is the highlight of this release, Google also today announced that it’s working on a number of new Material Components, that is, a set of pre-built design components. These will launch later this year.

The real highlight of the release may be Material Gallery, though. “Now anyone can use Material Gallery to review and comment on design iterations,” the Material Design team writes today. It’s the same tool that Google designers have used for years to collaborate on designs in-house, and now it’s out of beta and open to all. The Gallery tool lets designers comment on their colleagues’ designs, no matter whether that’s an image or a video frame.

Google notes that the Gallery isn’t just for sharing and collaborating on designs but that it will also allow developers to take those designs and bring them into the Theme Editor.

You can now run Linux apps on Chrome OS

For the longest time, developers have taken Chrome OS machines and run tools like Crouton to turn them into Linux-based developer machines. That was a bit of a hassle, but it worked. But things are getting easier. Soon, if you want to run Linux apps on your Chrome OS machine, all you’ll have to do is switch a toggle in the Settings menu. That’s because Google is going to start shipping Chrome OS with a custom virtual machine that runs Debian Stretch, the current stable version of the operating system.

It’s worth stressing that we’re not just talking about a shell here, but full support for graphical apps, too. That means you could now, for example, run Microsoft’s Linux version of Visual Studio Code right on your Chrome OS machine. Or build your Android app in Android Studio and test it right on your laptop, thanks to the built-in support for Android apps that came to Chrome OS last year.

The first preview of Linux on Chrome OS is now available on the Pixelbook, with support for more devices coming soon.

Google’s PM director for Chrome OS, Kan Liu, told me the company was obviously aware that people were using Crouton to do this before. But going that route also meant doing away with all of the security features that come with Google’s operating system. And as more powerful Chrome OS machines hit the market in recent years, the demand for a feature like this also grew.

To enable support for graphical apps, the team opted to integrate the Wayland display server; from the user’s perspective, the actual window dressing will look the same as any other Android or web app on Chrome OS.

Most regular users won’t necessarily benefit from built-in Linux support, but this will make Chrome OS machines even more attractive to developers, especially the more high-end ones like Google’s own Pixelbook. Liu stressed that his team put quite a bit of work into optimizing the virtual machine, too, so there isn’t a lot of overhead when you run Linux apps, meaning that even less powerful machines should be able to handle a code editor without issues.

Now, it’s probably only a matter of hours before somebody starts running Windows apps on Chrome OS with the help of the Wine compatibility layer.

Watch Google I/O developer keynote live right here

Google I/O is nowhere near done. While the main consumer keynote just ended, the company is about to unveil the next big things when it comes to APIs, SDKs, frameworks and more.

The developer keynote starts at 12:45 PM Pacific Time (3:45 PM on the East Coast, 8:45 PM in London, 9:45 PM in Paris) and you can watch the live stream right here on this page.

If you’re an Android developer, this is where you’ll get the juicy details about the next version of Android. You can expect new possibilities and developer tools for you and your company. We’ll have a team on the ground to cover the best bits right here on TechCrunch.

Watch Google I/O keynote live right here

How did you find Microsoft Build yesterday? We don’t really have time for your answer because Google I/O is already here! Google is kicking off its annual developer conference today. As usual, there will be a consumer keynote with major new products in the morning, and a developer-centric keynote in the afternoon.

The conference starts at 10 AM Pacific Time (1 PM on the East Coast, 6 PM in London, 7 PM in Paris) and you can watch the live stream right here on this page. The developer keynote will be at 12:45 PM Pacific Time.

Rumor has it that Google is about to share more details about Android P, the next major release of its Android platform. But you can also expect some Google Assistant and Google Home news, some virtual reality news and maybe even some Wear OS news. We have a team on the ground ready to cover the event, so don’t forget to read TechCrunch to get our take on today’s news.