Live Share in Visual Studio lets you code and debug together

At its Build developer conference, Microsoft today announced that Live Share, its previously announced collaborative development feature for Visual Studio and Visual Studio Code, is now available to developers who want to give it a try. Until now, the feature, which lets developers work together more easily, was only available in a private preview. It’s free for all developers, including those who use the free Visual Studio Code editor.

In a way, Live Share is a bit like using Google Docs for collaboration. Developers can see where everybody’s cursor is and when their colleagues are typing, no matter what platform they are on. All developers in a Live Share session can stay within their preferred (and customized) environment. That gives developers quite a bit more flexibility compared to a traditional screen share.

One feature Microsoft heavily emphasized at Build is the ability to share debugging sessions, too. That means everybody can set breakpoints and get the full logs. While writing code is one thing, being able to share debugging sessions may actually be even more important to many developers.

Live Share supports all major languages, including C#, Python, Java, Go and C++.

Without its own phone OS, Microsoft now focuses on its Android Launcher and new ‘Your Phone’ experience

Microsoft may have retreated from the smartphone operating system wars, but that doesn’t mean it has given up on trying to get a foothold on other platforms. Today, at its Build developer conference, the company announced three new services that bring its overall cross-platform strategy into focus.

On Android, the company’s Trojan horse has long been the Microsoft Launcher, which is getting support for the Windows Timeline feature. In addition, Microsoft today announced the new “Your Phone” experience, which lets Windows users answer text messages right from their desktops, share photos from their phones, and see and respond to notifications (though that name, we understand, is not final and may still change). The other cornerstone of this approach is the Edge browser, which will soon become the home of Timeline on iOS, where Microsoft can’t offer a launcher-like experience.

There are a couple of things to unpack here. Central to the overall strategy is Timeline, a new feature that launched with the latest Windows 10 update and allows users to see what they last worked on and which sites they recently browsed, then move between devices to pick up where they left off. For Timeline to fulfill its promise, developers have to support it, and as of now it’s mostly Microsoft’s own apps that will show up in the Timeline, making it only marginally interesting. Given enough surfaces to highlight this feature, though, developers will likely want to implement it, and since doing so doesn’t take a ton of work, chances are quite a few third-party applications will soon support it.
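For developers curious what “supporting Timeline” actually involves, here is a minimal, hypothetical Python sketch that publishes a user activity through the Microsoft Graph activities endpoint, one of the ways apps can feed Timeline. The access token, activity ID, URLs and payload fields are placeholders and should be checked against Microsoft’s documentation; this illustrates the shape of the call, not Microsoft’s reference implementation.

```python
import requests

# Assumptions: you already hold an OAuth access token with the
# UserActivity.ReadWrite.CreatedByApp scope; every ID and URL below is a placeholder.
GRAPH_ACTIVITIES = "https://graph.microsoft.com/v1.0/me/activities"
ACCESS_TOKEN = "<access-token>"

activity_id = "report-q2-draft"  # hypothetical, app-defined activity ID
payload = {
    "appActivityId": activity_id,
    "activitySourceHost": "https://example.com",
    "appDisplayName": "Example Docs",
    "activationUrl": "https://example.com/docs/q2-draft",
    "fallbackUrl": "https://example.com/docs/q2-draft",
    "visualElements": {
        "displayText": "Q2 report draft",
        "description": "Pick up where you left off",
    },
}

# PUT creates or replaces the activity so it can surface in Timeline.
resp = requests.put(
    f"{GRAPH_ACTIVITIES}/{activity_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print("Activity published, status", resp.status_code)
```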

On Android, the Microsoft Launcher will soon support Timeline for cross-device application launching. This means that if you are working on a document in Word on your desktop, you’ll see that document in your Timeline on Android and you’ll be able to continue working on it in the Word Android app with a single tap.

Kevin Gallo, Microsoft’s head of the Windows developer platform, tells me that if you don’t have the right app installed yet, the Launcher will help you find it in the Google Play store.

With this update, Microsoft is also giving enterprises more reasons to install the Launcher. IT admins can now manage the Launcher and control what applications show up there.

On iOS, Microsoft’s home for the Timeline will be the Edge browser. I’d be surprised if Microsoft didn’t decide to launch a stand-alone Timeline app at some point in the future. It probably wants to encourage more use of Edge on iOS right now, but in the long run, I’m not sure that’s the right strategy.

The new Your Phone service is another part of the strategy (though outside of Timeline) and its focus is on both consumers and business users (though there is often no clear line between those anyway). This new feature will start rolling out in the Windows Insider Program soon and it’ll basically replicate some of the functionality that you may be familiar with from apps like Pushbullet. Besides mirroring notifications and allowing you to respond to text messages, it’ll also allow you to move photos between your phone and Windows 10 machines. Oddly, Microsoft doesn’t mention other file types in its materials, though it’ll likely support those, too.

Going forward, we’ll likely see Microsoft embrace a wider range of these experiences as it looks to extend its reach into third-party platforms like Android.

Watch the Microsoft Build 2018 keynote live right here

Microsoft is holding its annual Build developer conference this week, and the company is kicking off the event with its opening keynote this morning. You can watch the live stream right here.

The keynote is scheduled to start at 8:30 am on the West Coast, 11:30 am on the East Coast, 4:30 pm in London and 5:30 pm in Paris.

This is a developer conference, so you shouldn’t expect new hardware devices. Build is usually focused on all things Windows 10, Azure and beyond. It’s a great way to see where Microsoft is heading. We have a team on the ground, so you can follow all of our coverage on TechCrunch.

Kubernetes stands at an important inflection point

Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly, and that has raised both a sense of possibility and a boatload of questions.

At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups trying to figure out exactly what they had here with this new thing they found.

The project has been gaining acceptance as the de facto container orchestration tool, and as that happened, it was no longer about simply getting a project off the ground and proving that it could work in production. It now required a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.

As this has happened, the various members who make up this growing group of users need to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now big boy and big girl implementations, and they require a new level of sophistication to make them work.

Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.

A community in transition

The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (CNCF for short). At the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience and showing the enormous growth of the project.

Photo: Ron Miller

If you wanted proof of Kubernetes’ (and by extension cloud native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just last year.

The hotel and conference center were buzzing with conversation. In every corner and hallway, on every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue, and even throughout the city, people chatted, debated and discussed Kubernetes. The energy was palpable.

David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was running Kubernetes in the early days (way back in 2015) and he was certainly surprised to see how big it has become in such a short time.

“I couldn’t have predicted it would be like this. I joined in January, 2015 and took on project management for Google Kubernetes. I was stunned at the pent up demand for this kind of thing,” he said.

Growing up

Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn created opportunities for new services and startups to fill in the many gaps. As Aparna Sinha, the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want some level of certainty that earlier adopters were willing to forgo to take a plunge into the new and exciting world of containers.

Photo: Cloud Native Computing Foundation

As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features and that includes security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad of service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.

Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in a container orchestration tool called Borg, which the company has been using internally for years. While that evolved into what we know today as Kubernetes, it certainly required some significant repackaging to work outside of Google. Yet that early refinement at Google gave it an enormous head start over an average open source project, which could account for its meteoric rise.

“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.

For every action

One thing everyone seemed to recognize at KubeCon was that in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using it today mostly still fall under the early adopter moniker. That remains true even though there are some full-blown enterprise implementations, like CERN, the European physics organization, which has spun up 210 Kubernetes clusters, or JD.com, the Chinese internet shopping giant, which has 20,000 servers running Kubernetes, with its largest cluster consisting of over 5,000 servers. Still, it’s fair to say that most companies aren’t that far along yet.

Photo: Ron Miller

But the strength of an enthusiastic open source community like Kubernetes and cloud native computing in general means that there are companies, some new and some established, trying to solve these problems, along with the multitude of new ones that seem to pop up with each new milestone and each solved issue.

As Abbie Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.

“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.

It’s worth noting that several speakers pointed out that diversity of thought also requires actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.

In spite of this, Kubernetes has grown and developed rapidly, while benefiting from a community that so enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take the next step toward broader adoption or whether it will peak and fall back.

Lobe’s ridiculously simple machine learning platform aims to empower non-technical creators

Machine learning may be the tool du jour for everything from particle physics to recreating the human voice, but it’s not exactly the easiest field to get into. Despite the complexities of video editing and sound design, we have UIs that let even a curious kid dabble in them, so why not for machine learning? That’s the goal of Lobe, a startup and platform that genuinely seems to have made AI models as simple to put together as LEGO bricks.

I talked with Mike Matas, one of Lobe’s co-founders and the designer behind many a popular digital interface, about the platform and his motivations for creating it.

“There’s been a lot of situations where people have kind of thought about AI and have these cool ideas, but they can’t execute them,” he said. “So those ideas just like shed, unless you have access to an AI team.”

This happened to him, too, he explained.

“I started researching because I wanted to see if I could use it myself. And there’s this hard to break through veneer of words and frameworks and mathematics — but once you get through that the concepts are actually really intuitive. In fact even more intuitive than regular programming, because you’re teaching the machine like you teach a person.”

But like the hard shell of jargon, existing tools were also rough around the edges: powerful and functional, but much more like learning a development environment than playing around in Photoshop or Logic.

“You need to know how to piece these things together, there are lots of things you need to download. I’m one of those people who if I have to do a lot of work, download a bunch of frameworks, I just give up,” he said. “So as a UI designer I saw the opportunity to take something that’s really complicated and reframe it in a way that’s understandable.”

Lobe, which Matas created with his co-founders Markus Beissinger and Adam Menges, takes the concepts of machine learning, things like feature extraction and labeling, and puts them in a simple, intuitive visual interface. As demonstrated in a video tour of the platform, you can make an app that recognizes hand gestures and matches them to emoji without ever seeing a line of code, let alone writing one. All the relevant information is there, and you can drill down to the nitty-gritty if you want, but you don’t have to. The ease and speed with which new applications can be designed and experimented with could open up the field to people who see the potential of the tools but lack the technical know-how.

He compared the situation to the early days of PCs, when computer scientists and engineers were the only ones who knew how to operate them. “They were the only people able to use them, so they were the only people able to come up with ideas about how to use them,” he said. But by the late ’80s, computers had been transformed into creative tools, largely because of improvements to the UI.

Matas expects a similar flood of applications, even beyond the many we’ve already seen, as the barrier to entry drops.

“People outside the data science community are going to think about how to apply this to their field,” he said, and unlike before, they’ll be able to create a working model themselves.

A raft of examples on the site show how a few simple modules can give rise to all kinds of interesting applications: reading lips, tracking positions, understanding gestures, generating realistic flower petals. Why not? You need data to feed the system, of course, but doing something novel with it is no longer the hard part.

And in keeping with the machine learning community’s commitment to openness and sharing, Lobe models aren’t some proprietary thing you can only operate on the site or via the API. “Architecturally we’re built on top of open standards like Tensorflow,” Matas said. Do the training on Lobe, test it and tweak it on Lobe, then compile it down to whatever platform you want and take it to go.
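Matas didn’t walk through the export format in detail, but if the compiled output is a standard TensorFlow SavedModel (an assumption on my part), running it outside Lobe could look roughly like this minimal Python sketch; the directory name, input shape and signature are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Assumption: "exported_model/" is a standard TensorFlow SavedModel directory
# produced by the export step; the 224x224 RGB input shape is hypothetical.
model = tf.saved_model.load("exported_model")
infer = model.signatures["serving_default"]

# Feed a dummy image batch just to exercise the graph.
image = np.random.rand(1, 224, 224, 3).astype("float32")
outputs = infer(tf.constant(image))

# Output tensor names vary by model; print whatever the signature exposes.
for name, tensor in outputs.items():
    print(name, tensor.shape)
```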

Right now the site is in closed beta. “We’ve been overwhelmed with responses, so clearly it’s resonating with people,” Matas said. “We’re going to slowly let people in, it’s going to start pretty small. I hope we’re not getting ahead of ourselves.”

Google Kubeflow, machine learning for Kubernetes, begins to take shape

Ever since Google created Kubernetes as an open source container orchestration tool, it has seen the project blossom in ways the company might never have imagined. As the project gains in popularity, we are seeing many adjunct programs develop. Today, Google announced the release of version 0.1 of the Kubeflow open source tool, which is designed to bring machine learning to Kubernetes containers.

While Google has long since moved Kubernetes into the Cloud Native Computing Foundation, it continues to be actively involved, and Kubeflow is one manifestation of that. The project was first announced only at the end of last year at KubeCon in Austin, but it is beginning to gain some momentum.

David Aronchick, who runs Kubeflow for Google, led the Kubernetes team for 2.5 years before moving to Kubeflow. He says the idea behind the project is to enable data scientists to take advantage of running machine learning jobs on Kubernetes clusters. Kubeflow lets machine learning teams take existing jobs and simply attach them to a cluster without a lot of adapting.

With today’s announcement, the project begins to move ahead and, according to a blog post announcing the milestone, brings a new level of stability while adding a slew of new features that the community has been requesting. These include JupyterHub for collaborative and interactive training on machine learning jobs, as well as TensorFlow training and hosting support, among other elements.
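To give a flavor of what “attach an existing job to a cluster” looks like, here is a minimal sketch that submits a TFJob custom resource with the official Kubernetes Python client. The API group version, namespace, image and names are assumptions (they have changed across Kubeflow releases), so treat it as an illustration rather than the project’s canonical workflow.

```python
from kubernetes import client, config

# Assumptions: kubectl is already pointed at a cluster with Kubeflow installed,
# the TFJob CRD is served as kubeflow.org/v1 (this differs between releases),
# and the image/name below are placeholders.
config.load_kube_config()
api = client.CustomObjectsApi()

tfjob = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "TFJob",
    "metadata": {"name": "mnist-demo", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 1,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",
                            "image": "example.registry/mnist-train:latest",
                        }]
                    }
                },
            }
        }
    },
}

# Create the custom resource; the training operator picks it up from there.
api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1",
    namespace="kubeflow", plural="tfjobs", body=tfjob,
)
print("TFJob submitted")
```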

Aronchick emphasizes that, as an open source project, Kubeflow lets you bring whatever tools you like; you are not limited to TensorFlow, despite the fact that this early release does include support for Google’s machine learning tools. You can expect additional tool support as the project develops further.

In just over four months since the original announcement, the community has grown quickly, with over 70 contributors, more than 20 contributing organizations and over 700 commits across 15 repositories. You can expect the next version, 0.2, sometime this summer.

Amazon opens up in-skill purchases to all Alexa developers

Amazon today launched in-skill purchasing to all Alexa developers, along with Amazon Pay for skills. That means developers have a way to generate revenue from their voice applications on Alexa-powered devices, like Amazon’s Echo speakers. For example, developers could charge for additional packs to go along with their voice-based games, or offer other premium content to expand their free voice app experience.

The feature was previously announced in November 2017, but at the time it was only available to a small handful of voice app developers, like the makers of Jeopardy! and other game publishers.

When in-skill purchasing is added to a voice application – Amazon calls these apps Alexa’s “skills” – customers can ask Alexa to shop the purchase suggestions on offer and then pay by voice, using the payment information already associated with their Amazon account.

Developers are in control of what content is offered at which price, but Amazon will handle the actual purchasing flow. It also offers self-serve tools to help developers manage their in-skill purchases and optimize their sales.
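As a rough sketch of how that flow hangs together, a skill backend hands the conversation over to Alexa with a purchase directive and gets called back with the result. The Python below shows the approximate shape of such a response; the product ID is a placeholder and the exact fields should be verified against Amazon’s in-skill purchasing documentation.

```python
import json

# Hypothetical product ID created in the Alexa developer console.
EXPANSION_PACK_ID = "amzn1.adg.product.example-expansion-pack"


def build_buy_response(product_id: str, correlation_token: str) -> dict:
    """Skill response that asks Alexa to run its purchase flow for a product."""
    return {
        "version": "1.0",
        "response": {
            "directives": [{
                # Alexa handles the offer, confirmation and payment, then
                # sends the skill a purchase-result request to resume from.
                "type": "Connections.SendRequest",
                "name": "Buy",
                "payload": {"InSkillProduct": {"productId": product_id}},
                "token": correlation_token,
            }],
            "shouldEndSession": True,
        },
    }


if __name__ == "__main__":
    print(json.dumps(build_buy_response(EXPANSION_PACK_ID, "session-123"), indent=2))
```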

While any Alexa device owner can buy the available in-skill purchases, Amazon Prime members will get the best deal.

Amazon says that in-skill purchases must offer some sort of value-add for Prime subscribers, like a discounted price, exclusive content or early access. Developers are paid 70 percent of the list price for their in-skill purchase, before any Amazon discount is applied.

Already, Sony’s Jeopardy!, Teen Jeopardy! and Sports Jeopardy!; The Ellen Show’s Heads Up; Fremantle’s Match Game; HISTORY’s Ultimate HISTORY Quiz; and TuneIn Live have launched Alexa skills with premium content.

To kick off today’s launch of general availability, Amazon is announcing a handful of others who will do the same. This includes NBCU’s SYFY WIRE, which will offer three additional weekly podcasts exclusive to Alexa (Geeksplain, Debate Club, and Untold Story); Volley Inc.’s Yes Sire, which offers an expansion pack for its role-playing game; and Volley Inc.’s Word of the Day, which will soon add new vocabulary packs to purchase.

In-skill purchasing is only one of the ways Amazon lets developers generate revenue.

The company is also now offering a way for brands and merchants to sell products and services (like event tickets or flower delivery) through Alexa, using Amazon Pay for Alexa Skills. Amazon Pay integrates with existing CRM and order management solutions, says Amazon, allowing merchants to manage sales in their current process. This is generally available as of today, too.

And it’s been paying top developers directly through its Developer Rewards program, which is an attempt to seed the ecosystem with skills ahead of a more robust system for skill monetization.

The news was announced alongside an update on Alexa’s skill ecosystem, which has 40,000 skills available, up from 25,000 last December.

However, the ecosystem today has a very long tail. Many of the skills have few or even no users, or simply represent apps from developers toying around with voice app development. Research on how customers actually engage with their voice devices has shown that people largely use them for things like news and information, smart home control, and setting timers and reminders – not necessarily things that require voice apps.

Facebook tool warns developers of phishing attacks dangling lookalike domains

Phishing seems like a problem that will be here for the long haul, so I welcome any tools to combat it with open arms. Today Facebook announced one: a service for domain owners or concerned users that watches for sketchy versions of web addresses that might indicate a phishing attempt in the offing.

“The developer only needs to specify the domain name they care about and our tool will take care of the rest,” explained Facebook security engineer David Huang. “For example, if you subscribe to phishing alerts for a legitimate domain ‘facebook.com,’ we’ll alert you when we detect a potential phishing domain like ‘facebook.com.evil.com’ and other malicious variations as we see them.”

Hosting your phishing website as a subdomain of evil.com seems like kind of a giveaway. But there are subtler ways to fool people. If someone wanted to make you think that an email was coming from this website, for instance, they might register something like techcrunch-support.com or techcrunch.official.site and send it from there.

Hi Peter.

Small variations in spelling work, too: would you notice that an email came from techcruhch.com or techcrunoh.com if you were on your phone, walking down the street and trying not to be hit by people riding electric scooters? I think not. Back in the day even CrouchGear might have worked.

And lookalike characters that render differently inline are a strange new threat: whɑtsɑpp.com has an alpha (or something) instead of an a, and helpfully renders as xn--whtspp-cxcc.com. Look, I didn’t design the system. I just use it.

The tool looks for all these variations in domains it encounters by watching the stream of certificates being issued to new domains. “We have been using these logs to monitor certificates issued for domains owned by Facebook and have created tools to help developers take advantage of the same approach,” reads the Facebook blog post. Nice of them!
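Facebook hasn’t published the internals of its matcher, but the general idea is easy to sketch: take each domain seen in newly issued certificates and flag anything that looks confusingly close to a domain you care about. The hypothetical Python below covers only that matching step (folding a few lookalike characters and scoring near-miss spellings); a real system would feed it from a certificate transparency log stream and use a much larger confusables table.

```python
from difflib import SequenceMatcher

# Domains we want to protect (stand-ins for whatever a developer registers).
WATCHED = ["facebook.com", "techcrunch.com", "whatsapp.com"]

# Tiny, deliberately incomplete homoglyph map for illustration.
HOMOGLYPHS = {"ɑ": "a", "0": "o", "1": "l", "ο": "o", "е": "e"}


def fold(domain: str) -> str:
    """Lowercase, strip a leading www., and replace a few lookalike characters."""
    domain = domain.lower().removeprefix("www.")
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in domain)


def suspicious(candidate: str, threshold: float = 0.85) -> list[str]:
    """Return the watched domains that the candidate appears to impersonate."""
    raw = candidate.lower().removeprefix("www.")
    folded = fold(candidate)
    hits = []
    for watched in WATCHED:
        if raw == watched:
            continue  # the legitimate domain itself
        if folded == watched or watched in folded:
            hits.append(watched)  # homoglyph twin or facebook.com.evil.com trick
        elif SequenceMatcher(None, folded, watched).ratio() >= threshold:
            hits.append(watched)  # near-miss spelling such as techcruhch.com
    return hits


for d in ["facebook.com.evil.com", "techcruhch.com", "whɑtsɑpp.com", "example.com"]:
    print(d, "->", suspicious(d))
```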

Developers can sign up here and submit domains they’d like to monitor. Facebook won’t do anything but alert you that it detected something weird, so if there’s a false positive you don’t need to worry about getting kicked off your domain. On the other hand, if scammers are setting up shop at a doppelgänger web address, you’ll have to do the legwork yourself to get it shut down and warn your own users to be on the lookout.

Facebook animates photo-realistic avatars to mimic VR users’ faces

Facebook wants you to look and move like you in VR, even if you’ve got a headset strapped to your face in the real world. That’s why it’s building a new technology that uses a photo to map someone’s face into VR, and sensors to detect facial expressions and movements to animate that avatar so it looks like you without an Oculus on your head.

CTO Mike Schroepfer previewed the technology during his day 2 keynote at Facebook’s F8 conference. Eventually, this technology could let you bring your real-world identity into VR so you’re recognizable by friends. That’s critical to VR’s potential to let us eradicate the barriers of distance and spend time in the same “room” with someone on the other side of the world. These social VR experiences will fall flat without emotion that’s obscured by headsets or left out of static avatars. But if Facebook can port your facial expressions alongside your mug, VR could elicit similar emotions to being with someone in person.

Facebook has been making steady progress on the avatar front over the years. What began as a generic blue face eventually gained personalized features and skin tones, and became a polished and evocative digital representation of a real person. Still, they’re not quite photo-realistic.

Facebook is inching closer, though, by using hand-labeled characteristics on portraits of people’s faces to train its artificial intelligence how to turn a photo into an accurate avatar.

Meanwhile, Facebook has tried to come up with new ways to translate emotion into avatars. Back in late 2016, Facebook showed off its “VR emoji gestures,” which let users shake their fists to turn their avatar’s face mad, or shrug their shoulders to adopt a confused expression.

Still, the biggest problem with Facebook’s avatars is that they’re trapped in its worlds of Oculus and social VR. In October, I called on Facebook to build a competitor to Snapchat’s wildly popular Bitmoji avatars, and we’re still waiting.

VR headsets haven’t seen the explosive user adoption some expected, in large part because they lack enough compelling experiences. There are zombie shooters and puzzle rooms and shipwrecks to explore, but most users tire of them quickly. Games and media lose their novelty in a way social networking doesn’t. Think about what you were playing or watching 14 years ago; meanwhile, we’re still using Facebook.

That’s why the company needs to nail emotion within VR. It’s the key to making the medium impactful and addictive.

Google revamps its Google Maps developer platform

Google is launching a major update to its Google Maps API platform for developers today — and it’s also giving it a new name: the Google Maps Platform.

This is one of the biggest changes to the platform in recent years, and it’ll greatly simplify the Google Maps developer offerings and how Google charges for access to those APIs. Starting June 11, though, all Google Maps developers will have to have a valid API key and a Google Cloud Platform billing account.

As part of this new initiative, Google is combining the 18 individual Maps APIs the company currently offers into only three core products: Maps, Routes and Places. The good news for developers here is that Google promises their existing code will continue to work without any changes.
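For developers wondering what the API key requirement looks like in practice, here is a minimal Python sketch against the long-standing Geocoding endpoint; the key is a placeholder, and the request assumes billing is enabled on the associated Google Cloud project, per the June 11 change noted above.

```python
import requests

# Placeholder key from a Google Cloud project with billing enabled.
MAPS_API_KEY = "<your-api-key>"
GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

params = {
    "address": "1600 Amphitheatre Parkway, Mountain View, CA",
    "key": MAPS_API_KEY,
}

resp = requests.get(GEOCODE_URL, params=params, timeout=10)
resp.raise_for_status()
data = resp.json()

if data["status"] == "OK":
    location = data["results"][0]["geometry"]["location"]
    print("lat/lng:", location["lat"], location["lng"])
else:
    # REQUEST_DENIED typically means a missing key or billing account.
    print("Geocoding failed:", data["status"])
```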

As part of this update, Google is also changing how it charges for access to these APIs. It now offers a single pricing plan with access to free support. Currently, Google offers both a Standard and a Premium plan (the Premium plan includes access to support, for example), but going forward it’ll offer only a single plan, which also provides developers with $200 worth of free monthly usage. As usual, there are also bespoke pricing plans for enterprise customers.

As Google also announced today, the company plans to continue launching various industry-specific, Maps-centric solutions. Earlier this year, it launched a program for game developers who want to build real-world games on Maps data, for example, and today it announced similar solutions for asset tracking and ridesharing. Lyft already started using the ridesharing product in its app last year.

“Our asset tracking offering helps businesses improve efficiencies by locating vehicles and assets in real-time, visualizing where assets have traveled, and routing vehicles with complex trips,” the Maps team writes in today’s announcement. “We expect to bring new solutions to market in the future, in areas where we’re positioned to offer insights and expertise.”

Overall, the Google Maps team seems to be moving in the right direction here. Google Maps API access has occasionally been a divisive issue, especially at times when Google changed its free usage levels. Today’s change probably won’t provoke that kind of reaction from the developer community, since it’ll likely make life easier for developers in the long run.