Microsoft and Oracle link up their clouds

Microsoft and Oracle announced a new alliance today that will see the two companies link their clouds over a direct network connection so that their users can move workloads and data seamlessly between the two. The alliance goes a bit beyond basic connectivity, though, and also includes identity interoperability.

This kind of alliance between what are essentially competing clouds is relatively unusual, but while Oracle wants to be seen as a major player in this space, it also realizes that it isn’t likely to reach the size of an AWS, Azure or Google Cloud anytime soon. For Oracle, the alliance means that its users can run services like Oracle E-Business Suite and Oracle JD Edwards on Azure while still using an Oracle database in the Oracle cloud, for example. With that, Microsoft still gets to run the workloads and Oracle gets to do what it does best (though Azure users will also continue to be able to run their Oracle databases in the Azure cloud).

“The Oracle Cloud offers a complete suite of integrated applications for sales, service, marketing, human resources, finance, supply chain and manufacturing, plus highly automated and secure Generation 2 infrastructure featuring the Oracle Autonomous Database,” said Don Johnson, executive vice president, Oracle Cloud Infrastructure (OCI), in today’s announcement. “Oracle and Microsoft have served enterprise customer needs for decades. With this alliance, our joint customers can migrate their entire set of existing applications to the cloud without having to re-architect anything, preserving the large investments they have already made.”

For now, the direct interconnect between the two clouds is limited to Azure US East and Oracle’s Ashburn data center. The two companies plan to expand this alliance to other regions in the future, though they remain mum on the details. It’ll support applications like JD Edwards EnterpriseOne, E-Business Suite, PeopleSoft, Oracle Retail and Hyperion on Azure, in combination with Oracle databases like RAC, Exadata and the Oracle Autonomous Database running in the Oracle Cloud.

“As the cloud of choice for the enterprise, with over 95% of the Fortune 500 using Azure, we have always been first and foremost focused on helping our customers thrive on their digital transformation journeys,” said Scott Guthrie, executive vice president of Microsoft’s Cloud and AI division. “With Oracle’s enterprise expertise, this alliance is a natural choice for us as we help our joint customers accelerate the migration of enterprise applications and databases to the public cloud.”

Today’s announcement also fits within a wider trend at Microsoft, which has recently started building a number of alliances with other large enterprise players, including its open data alliance with SAP and Adobe, as well as a somewhat unorthodox gaming partnership with Sony.

How Kubernetes came to rule the world

Open source has become the de facto standard for building the software that underpins the complex infrastructure that runs everything from your favorite mobile apps to your company’s barely usable expense tool. Over the course of the last few years, a lot of that new software has been deployed on top of Kubernetes, the tool for managing large server clusters running containers that Google open-sourced five years ago.

Today, Kubernetes is the fastest-growing open-source project, and earlier this month, the biannual KubeCon+CloudNativeCon conference attracted almost 8,000 developers to sunny Barcelona, Spain, making the event the largest open-source conference in Europe yet.

To talk about how Kubernetes came to be, I sat down with Craig McLuckie, one of the co-founders of Kubernetes at Google (who went on to found his own startup, Heptio, which he later sold to VMware); Tim Hockin, another Googler who was an early member of the project and also served on Google’s Borg team; and Gabe Monroy, who co-founded Deis, one of the first successful Kubernetes startups, and then sold it to Microsoft, where he is now the lead PM for Azure Container Compute (and often the public face of Microsoft’s efforts in this area).

Google’s cloud and the rise of containers

To set the stage a bit, it’s worth remembering where Google Cloud and container management were five years ago.

AWS launches an undo feature for its Aurora database service

Aurora, AWS’s managed MySQL and PostgreSQL database service, is getting an undo feature. As the company announced today, the new Aurora Backtrack feature will allow developers to “turn back time.” For now, it only works for MySQL databases. Developers have to opt in, and the feature is only available for newly created database clusters or clusters that have been restored from a backup.

The service does this by keeping a log of all transactions for a set amount of time (up to 72 hours). When things go wrong, say after you drop the wrong table in your production database, you simply pause your application and select the point in time you want to go back to. Aurora will then pause the database, too, close all open connections and drop anything that hasn’t been committed yet, before rolling back to its state before the error occurred.

Being able to reverse transactions isn’t completely new, of course. Many database systems, including MySQL, have implemented some version of this already, though those implementations are often far more limited in scope than what AWS announced today.

As AWS Chief Evangelist Jeff Barr notes in today’s announcement, disaster recovery isn’t the only use case here. “I’m sure you can think of some creative and non-obvious use cases for this cool new feature,” he writes. “For example, you could use it to restore a test database after running a test that makes changes to the database. You can initiate the restoration from the API or the CLI, making it easy to integrate into your existing test framework.”
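To make that concrete, here’s a minimal sketch of what that opt-in and rewind flow could look like using boto3, the AWS SDK for Python. The cluster name, credentials and timestamps are illustrative placeholders, not details from the announcement:

```python
# Hypothetical example: enabling Backtrack at cluster creation and
# rewinding the cluster later, e.g. after a destructive test run.
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")

# Backtrack has to be enabled when the cluster is created, by setting
# a backtrack window in seconds (72 hours is the maximum).
rds.create_db_cluster(
    DBClusterIdentifier="my-test-cluster",  # hypothetical name
    Engine="aurora",                        # Aurora MySQL
    MasterUsername="admin",
    MasterUserPassword="change-me",         # placeholder credential
    BacktrackWindow=72 * 60 * 60,
)

# Later, rewind the cluster to a point in time inside that window,
# here ten minutes before now.
rds.backtrack_db_cluster(
    DBClusterIdentifier="my-test-cluster",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=10),
)
```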

Aurora Backtrack is now available to all developers. It will cost about $0.012 per one million change records for databases hosted in the company’s U.S. regions, with slightly higher prices in Europe and Asia.

Google to acquire cloud migration startup Velostrata

Google announced today that it is acquiring Israeli cloud migration startup Velostrata. The companies did not share the purchase price.

Velostrata helps companies migrate from on-premises data centers to the cloud, a common requirement today as companies try to shift more workloads there. Transferring those legacy applications isn’t always a simple matter, though, and that’s where Velostrata could help Google Cloud customers.

As I wrote about the startup’s debut in 2014, it figured out a way to decouple storage and compute, an approach with wide usage and appeal. “The company has a sophisticated hybrid cloud solution that decouples storage from compute resources, leaving the storage in place on-premises while running a virtual machine in the cloud,” I wrote at the time.

But more than that, in a hybrid world where customer applications and data can live in the public cloud, on premises or a combination of the two, Velostrata gives customers the control to move and adapt workloads as needed and to prepare them for delivery on cloud virtual machines.

“This means [customers] can easily and quickly migrate virtual machine-based workloads like large databases, enterprise applications, DevOps, and large batch processing to and from the cloud,” Eyal Manor, VP of engineering at Google Cloud, wrote in the blog post announcing the acquisition.

This of course takes Velostrata from being a general-purpose cloud migration tool to one tuned specifically for Google Cloud, but it also gives Google a valuable asset in its battle to gain cloud market share.

In the past, Google Cloud head Diane Greene has talked about the business opportunities the company has seen in simply “lifting and shifting” workloads to the cloud. This acquisition gives it a key service to help customers who want to do that with Google Cloud.

Velostrata was founded in 2014 and has raised over $31 million from investors including Intel Capital and Norwest Venture Partners.

Intel Capital pumps $72M into AI, IoT, cloud and silicon startups, $115M invested so far in 2018

Intel Capital, the investment arm of the computer processor giant, is today announcing $72 million in funding for the 12 newest startups to enter its portfolio, bringing its total invested so far this year to $115 million. Announced at the company’s global summit currently underway in southern California, the investments in this latest tranche cover artificial intelligence, the Internet of Things, cloud services and silicon. A detailed list is below.

Other notable news from the event included a new deal between the NBA and Intel Capital to work on more collaborations in delivering sports content, an area where Intel has already been working for years; and the news that Intel has now invested $125 million in startups headed by minorities, women and other under-represented groups as part of its Diversity Initiative. The mark was reached 2.5 years ahead of schedule, it said.

The range of categories of the startups that Intel is investing in is a mark of how the company continues to back ideas that it views as central to its future business — and specifically where it hopes its processors will play a central role, such as AI, IoT and cloud. Investing in silicon startups, meanwhile, is a sign of how Intel is also focusing on businesses that are working in an area that’s close to the company’s own DNA.

It hasn’t been a completely smooth road. Intel became a huge presence in the world of IT many years ago with its advances in PC processors during the early rise of desktop and laptop computers, but its fortunes changed with the shift to mobile, which saw the emergence of a new wave of chip companies and designs for smaller and faster devices. Mobile is an area that Intel itself has acknowledged it largely missed out on.

Later years have seen still other issues hit the company, such as the Spectre security flaw (fixes for which are still being rolled out). And some of the business lines where Intel was hoping to make a mark have not panned out as it hoped. Just last month, Intel shut down development of its Vaunt smart glasses and, reportedly, the entirety of its new devices group.

The investments that Intel Capital makes, in contrast, are a fresher and more optimistic aspect of the company’s operations: they represent hopes and possibilities that still have everything to play for. And given that, on balance, things like AI and cloud services still have a long way to go before being truly ubiquitous, there remains a lot of opportunity for Intel.

“These innovative companies reflect Intel’s strategic focus as a data leader,” said Wendell Brooks, Intel senior vice president and president of Intel Capital, in a statement. “They’re helping shape the future of artificial intelligence, the future of the cloud and the Internet of Things, and the future of silicon. These are critical areas of technology as the world becomes increasingly connected and smart.”

Since 1991, Intel Capital has put $12.3 billion into 1,530 companies covering everything from autonomous driving to virtual reality and e-commerce, and it says that more than 660 of these startups have gone public or been acquired. Intel has organized its investment announcements thematically before: last October, it announced $60 million in funding for 15 big data startups.

Here’s a rundown of the investments getting announced today. Unless otherwise noted, the startups are based around Silicon Valley:

Avaamo is a deep learning startup that builds conversational interfaces based on neural networks to address problems in enterprises — part of the wave of startups that are focusing on non-consumer conversational AI solutions.

Fictiv has built a “virtual manufacturing platform” to design, develop and deliver physical products, linking companies that want to build products with manufacturers who can help them. This is a problem that has foxed many a startup (notable failures include Factorli out of Las Vegas), and it will be interesting to see whether newer advances will make the challenges here surmountable.

Gamalon, from Cambridge, MA, says it has built a machine learning platform that “teaches computers actual ideas.” Its so-called Idea Learning technology is able to order free-form data like chat transcripts and surveys into something that a computer can read, making the data more actionable.

Reconova, out of Xiamen, China, is focusing on problems in visual perception in areas like retail, smart homes and intelligent security.

Syntiant is an Irvine, CA-based AI semiconductor company that is working on ways of placing neural decision making on chips themselves to speed up processing and reduce battery consumption — a key challenge as computing devices move more information to the cloud and keep getting smaller. Target devices include mobile phones, wearable devices, smart sensors and drones.

Alauda out of China is a container-based cloud services provider focusing on enterprise platform-as-a-service solutions. “Alauda serves organizations undergoing digital transformation across a number of industries, including financial services, manufacturing, aviation, energy and automotive,” Intel said.

CloudGenix is a software-defined wide-area network startup, addressing an important area as more businesses move their networks and data into the cloud and look for cost savings. Intel says CloudGenix’s customers use its broadband solutions to deliver unified communications and data center applications to remote offices, cutting costs by 70 percent while seeing big improvements in speed and reliability.

Espressif Systems, also based in China, is a fabless semiconductor company, with its system-on-a-chip focused on IoT solutions.

VenueNext is a “smart venue” platform that delivers various services to visitors’ smartphones while providing analytics and more to the facility offering those services. Hospitals, sports stadiums and others are among its customers.

Lyncean Technologies is nearly 18 years old (founded in 2001) and has been working on the Compact Light Source (CLS), which Intel describes as a miniature synchrotron X-ray source that can be used for either extremely detailed large X-rays or very microscopic ones. The technology has both medical and security applications, making it a very timely business.

Movellus “develops semiconductor technologies that enable digital tools to automatically create and implement functionality previously achievable only with custom analog design.” Its main focus is creating more efficient approaches to designing analog circuits for systems on chips, needed for AI and other applications.

SiFive, founded by the inventors of RISC-V and led by a team of industry veterans, makes “market-ready processor core IP based on the RISC-V instruction set architecture.”

Watch the Google I/O developer keynote live right here

Google I/O is nowhere near done. The mainstream keynote just ended, but the company is about to unveil the next big things when it comes to APIs, SDKs, frameworks and more.

The developer keynote starts at 12:45 PM Pacific Time (3:45 PM on the East Coast, 8:45 PM in London, 9:45 PM in Paris), and you can watch the live stream right here on this page.

If you’re an Android developer, this is where you’ll get the juicy details about the next version of Android. You can expect new possibilities and developer tools for you and your company. We’ll have a team on the ground to cover the best bits right here on TechCrunch.

Microsoft and Red Hat now offer a jointly managed OpenShift service on Azure

Microsoft and Red Hat are deepening their existing alliance around cloud computing. The two companies will now offer a managed version of OpenShift, Red Hat’s container application platform, on Microsoft Azure. This service will be jointly developed and managed by Microsoft and Red Hat and will be integrated into the overall Azure experience.

Red Hat OpenShift on Azure is meant to make it easier for enterprises to create hybrid container solutions that span their on-premises networks and the cloud. That will give these companies the flexibility to move workloads around as needed, and it will give those that have bet on OpenShift the option to move their workloads close to the rest of Azure’s managed services, like Cosmos DB or Microsoft’s suite of machine learning tools.

Microsoft’s Brendan Burns, one of the co-creators of Kubernetes, told me that the companies decided that this shouldn’t just be a service that runs on top of Azure and consumes the Azure APIs. Instead, the companies made the decision to build a native integration of OpenShift into Azure — and specifically the Azure Portal. “This is a first in class fully enterprise-supported application platform for containers,” he said. “This is going to be an experience where enterprises can have all the experience and support they expect.”

Red Hat VP for business development and architecture Mike Ferris echoed this and added that his company is seeing a lot of demand for managed services around containers.

Watch the Microsoft Build 2018 keynote live right here

Microsoft is holding its annual Build developer conference this week, and the company is kicking off the event with its opening keynote this morning. You can watch the live stream right here.

The keynote is scheduled to start at 8:30 am on the West Coast, 11:30 am on the East Coast, 4:30 pm in London and 5:30 pm in Paris.

This is a developer conference, so you shouldn’t expect new hardware devices. Build is usually focused on all things Windows 10, Azure and beyond. It’s a great way to see where Microsoft is heading. We have a team on the ground, so you can follow all of our coverage on TechCrunch.

Kubernetes stands at an important inflection point

Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open-source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly, and that has raised both a sense of possibility and a boatload of questions.

At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups, trying to figure out exactly what they have in this new thing they’ve found.

The project has been gaining acceptance as the de facto container orchestration tool, and as that happened, it was no longer about simply getting a project off the ground and proving that it could work in production. It now required a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.

As this has happened, the various members who make up this growing group of users need to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now grown-up, production-scale implementations, and they require a new level of sophistication to make them work.

Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.

A community in transition

The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (CNCF for short). At the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience and showing the enormous growth of the project.

If you wanted proof of Kubernetes’ (and by extension cloud-native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just a year earlier.

The hotel and conference center were buzzing with conversation. In every corner and hallway, on every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue, and even across the city, people chatted, debated and discussed Kubernetes. The energy was palpable.

David Aronchick, who now runs Kubeflow, Google’s open-source machine learning project for Kubernetes, was working on Kubernetes in the early days (way back in 2015), and he was certainly surprised to see how big it has become in such a short time.

“I couldn’t have predicted it would be like this. I joined in January, 2015 and took on project management for Google Kubernetes. I was stunned at the pent up demand for this kind of thing,” he said.

Growing up

Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn has created opportunities for new services and startups to fill the many gaps. As Aparna Sinha, the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want a level of certainty that earlier adopters were willing to forgo when they took the plunge into the new and exciting world of containers.

As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features and that includes security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad of service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.

Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in Borg, a container orchestration tool the company had been using internally for years. Borg evolved into what we know today as Kubernetes, though it certainly required some significant repackaging to work outside of Google. Yet that early refinement inside Google gave it an enormous head start over the average open-source project, which could account for its meteoric rise.

“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.

For every action

One thing everyone seemed to recognize at KubeCon was that, in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using Kubernetes today mostly still fall under the early-adopter moniker. That remains true even though there are some full-blown enterprise implementations, like CERN, the European physics organization, which has spun up 210 Kubernetes clusters, or JD.com, the Chinese internet shopping giant, which has 20,000 servers running Kubernetes, with the largest cluster consisting of more than 5,000 servers. Still, it’s fair to say that most companies aren’t that far along yet.

But the strength of an enthusiastic open-source community like the one around Kubernetes and cloud-native computing in general means that there are companies, some new and some established, trying to solve these problems, as well as the multitude of new ones that seem to pop up with each new milestone and each solved issue.

As Abbie Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.

“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.

It’s worth noting that several speakers pointed out that diversity of thought also required actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.

In spite of this, Kubernetes has grown and developed rapidly while benefiting from a community that so enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take the next step toward broader adoption or will reach a peak and fall back.

Google Kubeflow, machine learning for Kubernetes, begins to take shape

Ever since Google created Kubernetes as an open-source container orchestration tool, the company has seen it blossom in ways it might never have imagined. As the project gains in popularity, we are seeing many adjunct programs develop. Today, Google announced the release of version 0.1 of Kubeflow, an open-source tool designed to bring machine learning to Kubernetes.

While Google has long since moved Kubernetes into the Cloud Native Computing Foundation, it continues to be actively involved, and Kubeflow is one manifestation of that. The project was first announced just at the end of last year at KubeCon in Austin, but it is already beginning to gain some momentum.

David Aronchick, who runs Kubeflow for Google, led the Kubernetes team for 2.5 years before moving to Kubeflow. He says the idea behind the project is to enable data scientists to take advantage of running machine learning jobs on Kubernetes clusters. Kubeflow lets machine learning teams take existing jobs and simply attach them to a cluster without a lot of adapting.

With today’s announcement, the project begins to move ahead, and according to a blog post announcing the milestone, it brings a new level of stability while adding a slew of new features the community has been requesting. These include JupyterHub for collaborative and interactive training on machine learning jobs, as well as TensorFlow training and hosting support, among other elements.
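To give a sense of the workflow, here’s a minimal sketch of how a training job might be submitted to a cluster running Kubeflow’s TensorFlow operator, using the official Kubernetes Python client. The TFJob manifest is illustrative only: the API version and field names changed across early Kubeflow releases, and the image name is a hypothetical placeholder.

```python
# Hypothetical example: submitting a TFJob custom resource to a cluster
# that has the Kubeflow tf-operator installed. Schema details are an
# assumption based on early (v1alpha1) releases and may differ.
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

tfjob = {
    "apiVersion": "kubeflow.org/v1alpha1",
    "kind": "TFJob",
    "metadata": {"name": "example-training-job"},
    "spec": {
        "replicaSpecs": [
            {
                "replicas": 1,
                "tfReplicaType": "MASTER",
                "template": {
                    "spec": {
                        "containers": [
                            {
                                "name": "tensorflow",
                                # placeholder training image
                                "image": "example.com/my-tf-training:latest",
                            }
                        ],
                        "restartPolicy": "OnFailure",
                    }
                },
            }
        ]
    },
}

# TFJob is a custom resource, so it is created via the CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1alpha1",
    namespace="default",
    plural="tfjobs",
    body=tfjob,
)
```

The point, as Aronchick describes it, is that the job definition simply wraps an existing container image; the training code itself doesn’t have to be rewritten to run on the cluster.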

Aronchick emphasizes that as an open-source project, Kubeflow lets you bring whatever tools you like; you are not limited to TensorFlow, even though this early release does include support for Google’s machine learning tools. You can expect additional tool support as the project develops further.

In just over four months since the original announcement, the community has grown quickly to more than 70 contributors from over 20 contributing organizations, along with more than 700 commits in 15 repositories. You can expect the next version, 0.2, sometime this summer.