Playbuzz becomes Ex.co and expands its content marketing platform

Playbuzz, a startup that helps publishers to add things like polls and galleries to their articles, has rebranded itself as Ex.co.

Co-founder and CEO Tom Pachys told me the name stands for “the experience company,” and he said it reflects the company’s broader content marketing ambitions. Ex.co will continue working with news publishers, but Pachys said there’s a bigger market for what the company has built.

“We’re seeing businesses wanting to become publishers in a way, to interact with their users in a way that’s very similar to what a publisher does,” Pachys said.

Playbuzz/Ex.co is hardly the first publishing startup to realize that there may be more money in content marketing, but Pachys argued that this isn’t just a sudden pivot. After all, the company is already working with clients like Visa, Red Bull and Netflix (as well as our corporate siblings at The Huffington Post).

“The previous name does not reflect the values that we stand for today — not even future values,” he said.

Pachys also suggested that existing content marketing tools are largely focused on operations and workflow — things like hiring the right freelancer — while Ex.co aims at making it easier to actually create the content.

“We’re the ones who innovate within the core — not around it, but the core itself,” he said. “And rather than trying to call them competition, we want to integrate with as many players in the ecosystem as possible.”

In addition to announcing the rebrand, Ex.co is also relaunching its platform as a broader content marketing tool, with new features like content templates, real-time analytics and lead generation.

Pachys, by the way, is new to the CEO role, having served as COO until recently, while previous Playbuzz CEO Shaul Olmert has become the company’s president. Pachys said the move wasn’t “directly correlated” with the other changes, and instead allows the two of them to focus on their strengths — Pachys oversees day-to-day operations, while Olmert focuses on investor relations and strategic deals.

“I co-founded the company with Shaul, who’s a very good friend of mine, we’ve known each other 20 years,” Pachys said. “Shaul is very much involved in the company.”

18 months after acquisition, MuleSoft is integrating more deeply into Salesforce

A year and a half after getting acquired by Salesforce for $6.5 billion, MuleSoft is beginning to resemble a Salesforce company, using its language and its methodologies to describe new products and services. This week at Dreamforce, Salesforce’s mega customer conference getting underway in San Francisco, MuleSoft announced a slew of new services as it integrates more deeply into the Salesforce family of products.

MuleSoft creates APIs to connect different systems. This could be quite useful for Salesforce as a bridge to older software, whether it lives on-prem or in the cloud. It lets Salesforce and its customers access data wherever it resides, even from different parts of the Salesforce ecosystem itself.

MuleSoft made a number of announcements designed to simplify that process and put it in the hands of more customers. For starters, it’s announcing Accelerators, which are pre-defined integrations that let companies connect more easily to other systems. Not surprisingly, two of the first ones connect data from external products and services to Salesforce Service Cloud and Salesforce Commerce Cloud.

“What we’ve done is we’ve pre-built integrations to common back-end systems like ServiceNow and JIRA in Service Cloud, and we prebuilt those integrations, and then automatically connected that data and services through a Salesforce Lightning component directly in the Service console,” Lindsey Irvine, chief marketing officer at MuleSoft, explained.

What this does is allow the agent to get a more complete view of the customer by getting not just the data that’s stored in Salesforce, but in other systems as well.
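To make the pattern concrete, here is a rough Python sketch of the kind of back-end call an Accelerator pre-builds: pulling a customer’s open tickets from ServiceNow’s REST Table API. The instance URL, credentials and query are hypothetical, and this illustrates the general integration pattern rather than MuleSoft’s actual implementation:

```python
import requests

# Hypothetical ServiceNow instance and credentials -- illustration only.
INSTANCE = "https://example.service-now.com"
AUTH = ("integration_user", "secret")

def fetch_open_incidents(customer_email: str) -> list:
    """Fetch a customer's open incidents via ServiceNow's REST Table API,
    the sort of external data an Accelerator surfaces in the Service console."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={
            "sysparm_query": f"caller_id.email={customer_email}^active=true",
            "sysparm_fields": "number,short_description,state,opened_at",
        },
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]
```

Per Irvine’s description, an Accelerator bundles calls like this together with the Lightning component that displays the results in the Service console, so the admin doesn’t have to build any of it by hand.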

The company also wants to put these kinds of integration skills in the hands of more Salesforce customers, so it has designed a set of courses in Trailhead, Salesforce’s training platform, with the goal of helping 100,000 Salesforce admins, developers, integration architects and line-of-business users develop expertise in creating and managing these kinds of integrations.

The company is also putting resources into creating the API Community Manager, a place where people involved in building and managing these integrations can get help from a community of users, all built on Salesforce products and services, says Mark Dao, chief product officer at MuleSoft.

“We’re leveraging Community Cloud, Service Cloud and Marketing Cloud to create a true developer experience platform. And what’s interesting is that it’s targeting both the business users — in other words, business development teams and marketing teams — as well as external developers,” he said. He added that the fact this is working with business users as well as the integration experts is something new, and the goal is to drive increased usage of APIs using MuleSoft inside Salesforce customer organizations.

Finally, the company announced Flow Designer, a new tool fueled by Einstein AI that helps users build workflows and integrations between systems without requiring coding skills.

MuleSoft Flow Designer requires no coding (Screenshot: MuleSoft)

Dao says this is about putting MuleSoft in reach of more users. “It’s about enabling use cases for less technical users in the context of the MuleSoft Anypoint Platform. This really requires a new way of thinking around creating integrations, and we’ve been making Flow Designer simpler and simpler, and removing that technical layer from those users,” he said.

API Community Manager is available now. Accelerators will be available by the end of the year, and Flow Designer updates will be available in Q2 2020, according to the company.

These and other features are all designed to take some of the complexity out of using MuleSoft to connect various systems across the organization, including both Salesforce and external programs, and to make use of data wherever it lives. MuleSoft does require a fair bit of technical skill, so if the company can simplify integration tasks, it could put the platform in the hands of many more users.

Ubiquity6’s Display.land is part 3D scanner, part social network

The world is being mapped in 3D — one brick, one bench, one building at a time. For things like hyper-accurate augmented reality, autonomous robots and self-driving cars, 2D maps and GPS only get you so far.

Apple is building its map with lasers strapped to the tops of cars. Niantic has talked about building 3D maps of parks and public spaces by way of user-submitted imagery. The Army is making 3D maps with drones.

Ubiquity6, a startup that’s spent much of the last two years quietly chipping away at the challenges of building shared augmented reality experiences, is trying something different: a social network, of sorts, for scanning and sharing 3D spaces.

The company’s first publicly launched app, Display.land, started rolling out on iOS and Android over the weekend. Part 3D scanner and part social network, it lets you scan a location or object, edit it (cropping it to just the bits you’re interested in, or adding pre-built digital objects), and share it with the world. Want everyone to see it? You can pin a scan to a map, allowing anyone panning by to explore your scan. Want to keep it to yourself? Flip the privacy toggle accordingly.

The idea: quick and simple 3D scans of real-world spaces, shareable with the world at large or just with the people you choose. Exploring a new city and found some neat art in an alleyway? Scan it and post it for everyone to “walk” around. Renting out an apartment and want to give potential tenants some idea of what the space is like? Scan it, put the link in the listing and it’ll open right up in their browser without any downloads.

Starting a new scan is simple: hit the “new” button, find some particularly interesting bit of geometry to focus on and hit “begin.” As you radiate away from the initial focal point, you’ll see your camera view filling with countless colored spheres. Each sphere represents a geometric feature that the camera has captured, helping to highlight the areas that have been sufficiently covered.

As you roam, a bar starts to stretch across the bottom of your screen. Once it seems like you’ve captured enough geometry for a complete mesh, the app will let you know — but if you want your scan to be more true to life, you’re free to continue scanning until the bar is completely full.

Between the point cloud data and all of the photographic textures being captured, these scans can get pretty big. My test scans were coming in at a few hundred megabytes. That’ll eat up your data quick if you’re uploading over a cell network, so you’ve got the option to hold off uploading until you’re back on Wi-Fi. Once uploaded, Ubiquity6 will take a few minutes to process everything, crunching all of the raw data into a model you can fly around and explore.

While the scans it makes are rarely perfect, it’s… really damned wild what they’re pulling off with just your phone’s RGB camera and its assorted built-in sensors. With a bit of practice and sufficient lighting, the scans it can pull off are rather incredible. Check out this scan of Ron Mueck’s Mask II sculpture from the SF MOMA, for example — or this pool from a skatepark in SF’s Mission District. (And note that it’s all rendering live in the browser; you can scroll to zoom, orbit around, etc.)

Scanning/editing/sharing is free. If you’re feeling fancy, you can even open your scan in a browser and download it in a file format (OBJ, PLY, or GLTF) that’s ready to be fiddled with in your desktop 3D modeling software of choice. As for how they’ll make money? The company plans to charge companies that need a bit more than the base offering — if a company wants to 3D scan a space at the highest possible fidelity, for example, they can pay extra for the added processing time.
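As a rough illustration of that workflow, once a scan has been downloaded as an OBJ, a few lines of Python with the open-source trimesh library (one of many tools that read these formats) are enough to inspect and convert it; the filename below is hypothetical:

```python
import trimesh  # pip install trimesh

# Load a scan exported from Display.land (filename is hypothetical).
mesh = trimesh.load("displayland_scan.obj", force="mesh")

# A quick look at what the phone actually captured.
print(f"{len(mesh.vertices):,} vertices, {len(mesh.faces):,} faces")
print("watertight:", mesh.is_watertight)

# Re-export in a different format for another tool in the pipeline.
mesh.export("displayland_scan.ply")
```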

Meanwhile, they’re laying the groundwork for what seems to be the company’s actual interest: shared, multiplayer augmented reality experiences. For now, these scans are mostly static — you can add cutesy 3D models like treasure chests and floating butterflies to mix it up a bit, but they’re mostly there just to be pretty. In time, though, they’re looking to add gaming elements; think games that automatically unlock when you walk into a certain physical space, with physics and functionality determined by the real-world geometry around you.

Ubiquity6 has raised a little over $37 million to date. It’s backed by KPCB, First Round, Index Ventures, Benchmark and Gradient, and was part of Disney’s fifth accelerator class.

Luko raises $22 million to improve home insurance

French startup Luko has raised a $22 million (€20 million) Series A round led by Accel. Founders Fund and Speedinvest are also participating in today’s funding round.

When you rent a place in France, you have to provide a certificate to your landlord saying that you are covered by a home insurance product. And, of course, you might want to insure your place if you own it.

While the market is huge, legacy insurance companies still dominate it. That’s why Luko wants to shake things up in three different ways.

First, it’s hard to sign up for home insurance in France. It usually involves a lot of emails, a printer, some signatures, etc. The hassle quickly adds up if you want to change your coverage level or add some options.

As expected, Luko’s signup process is pretty straightforward. You fill out a form on the company’s website and you get an insurance certificate minutes later.

Luko partners with La Parisienne Assurances to issue insurance contracts. So far, 15,000 people have signed up for Luko.

Second, if there’s some water damage or a fire, it can take a lot of time to get it fixed. Worse, if somebody breaks into your place, you’re not going to get your money back that quickly.

Luko wants to speed things up. You can make a claim via chat, over the phone or with a video call using the mobile app. The company tries its best to detect fraud and pay a claim as quickly as possible. Luko also recently announced an integration with Lydia, a popular peer-to-peer payment app in France, so that your payment is instant.

Third, Luko has a bold vision to make home insurance even more effective. The startup wants to detect issues before it’s too late. For instance, you could imagine receiving a water meter from Luko to detect leaks, or a door sensor to detect when somebody is trying to get in. We’ll find out if people actually want to put connected objects everywhere.

Finally, Luko has partnered with a handful of nonprofits to redistribute some of its revenue, and it has received B Corp certification. The startup makes money by taking a flat fee on your monthly subscription. If there’s money left over at the end of the year, Luko donates it to charities. Investors signed a pledge so that Luko doesn’t trade this model for growth.

Intel and Argonne National Lab on ‘exascale’ and their new Aurora supercomputer

The scale of supercomputing has grown almost too large to comprehend, with millions of compute units performing calculations at rates requiring, for the first time, the exa prefix, denoting quintillions of operations per second. How was this accomplished? With careful planning… and a lot of wires, say two people close to the project.

Intel and Argonne National Lab announced earlier this year that they were planning to take the wrapper off Aurora, a new exascale computer (one of several being built in the U.S.). I recently got a chance to talk with Trish Damkroger, head of Intel’s Extreme Computing Organization, and Rick Stevens, Argonne’s associate lab director for computing, environment and life sciences.

The two discussed the technical details of the system at the Supercomputing conference in Denver, where, probably, most of the people who can truly say they understand this type of work already were. So while you can read in industry journals and the press release about the nuts and bolts of the system, including Intel’s new Xe architecture and Ponte Vecchio general-purpose compute chip, I tried to get a little more of the big picture from the two.

It should surprise no one that this is a project long in the making — but you might not guess exactly how long: more than a decade. Part of the challenge, then, was to establish computing hardware that was leagues beyond what was possible at the time.

“Exascale was first being started in 2007. At that time we hadn’t even hit the petascale target yet, so we were planning like three to four magnitudes out,” said Stevens. “At that time, if we had exascale, it would have required a gigawatt of power, which is obviously not realistic. So a big part of reaching exascale has been reducing power draw.”
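A back-of-envelope calculation shows where that gigawatt figure comes from, using 2008’s Roadrunner (roughly a petaflop of sustained performance at about 2.35 megawatts) as the reference point. These are illustrative numbers, not Stevens’ exact math:

```python
# Rough 2008-era efficiency, using Roadrunner as the reference point:
# ~1 petaflop/s of sustained performance at ~2.35 MW.
flops_2008 = 1.0e15                       # FLOP/s
watts_2008 = 2.35e6                       # W
flops_per_watt = flops_2008 / watts_2008  # ~4.3e8

exaflop = 1.0e18                          # the exascale target, FLOP/s
power_needed = exaflop / flops_per_watt   # W
print(f"~{power_needed / 1e9:.1f} GW at 2008 efficiency")
# -> roughly 2.4 gigawatts, i.e. several dedicated power plants
```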

Intel’s supercomputing-focused Xe architecture is based on a 7-nanometer process, pushing the very edge of Newtonian physics — much smaller and quantum effects start coming into play. But the smaller the gates, the less power they take, and microscopic savings add up quickly when you’re talking billions and trillions of them.

But that merely exposes another problem: If you increase the power of a processor by 1000x, you run into a memory bottleneck. The system may be able to think fast, but if it can’t access and store data equally fast, there’s no point.

“By having exascale-level computing, but not exabyte-level bandwidth, you end up with a very lopsided system,” said Stevens.
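Illustrative numbers make the lopsidedness easy to see. A machine’s “balance” is the bytes per second of memory bandwidth available for every FLOP per second of compute; the ratios below are hypothetical, not Aurora’s actual figures:

```python
compute = 1.0e18   # 1 exaFLOP/s of peak compute

# "Machine balance" = bytes/s of memory bandwidth per FLOP/s of compute.
# These ratios are illustrative, not Aurora's actual figures.
for label, bytes_per_flop in [("balanced", 0.1), ("starved", 0.001)]:
    bandwidth = compute * bytes_per_flop  # bytes/s required
    print(f"{label}: {bandwidth / 1e18:.3f} exabytes/s")
# Even the balanced case calls for ~0.1 exabytes/s of aggregate
# bandwidth -- hence the talk of "exabyte-level bandwidth."
```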

And once you clear both those obstacles, you run into a third: what’s called concurrency. High-performance computing is as much about synchronizing a task across huge numbers of computing units as it is about making those units as powerful as possible. The machine operates as a whole, and as such every part must communicate with every other part, which becomes something of a problem as you scale up.

“These systems have many thousands of nodes, and the nodes have hundreds of cores, and the cores have thousands of computation units, so there’s like, billion-way concurrency,” Stevens explained. “Dealing with that is the core of the architecture.”
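The arithmetic behind “billion-way” is simple to check, with illustrative counts plugged into Stevens’ description:

```python
# Illustrative counts matching Stevens' description.
nodes = 10_000           # "many thousands of nodes"
cores_per_node = 100     # "hundreds of cores" per node
units_per_core = 1_000   # "thousands of computation units" per core

concurrency = nodes * cores_per_node * units_per_core
print(f"{concurrency:,} concurrent streams of work")  # 1,000,000,000
```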

How they did it, I, being utterly unfamiliar with the vagaries of high performance computing architecture design, would not even attempt to explain. But they seem to have done it, as these exascale systems are coming online. The solution, I’ll only venture to say, is essentially a major advance on the networking side. The level of sustained bandwidth between all these nodes and units is staggering.

Making exascale accessible

While even in 2007 you could predict that we’d eventually reach such low-power processes and improved memory bandwidth, other trends would have been nearly impossible to predict — for example, the exploding demand for AI and machine learning. Back then it wasn’t even a consideration, and now it would be folly to create any kind of high performance computing system that wasn’t at least partially optimized for machine learning problems.

“By 2023 we expect AI workloads to be a third of the overall HPC server market,” said Damkroger. “This AI-HPC convergence is bringing those two workloads together to solve problems faster and provide greater insight.”

To that end the architecture of the Aurora system is built to be flexible while retaining the ability to accelerate certain common operations, for instance the type of matrix calculations that make up a great deal of certain machine learning tasks.

“But it’s not just about performance, it has to be about programmability,” she continued. “One of the big challenges of an exascale machine is being able to write software to use that machine. oneAPI is going to be a unified programming model — it’s based on an open standard of Data Parallel C++, and that’s key for promoting use in the community.”

Summit, as of this writing the most powerful single computing system in the world, is very dissimilar to many of the systems developers are used to working on. If the creators of a new supercomputer want it to have broad appeal, they need to make it as close to a “normal” computer to operate as possible.

“It’s something of a challenge to bring x86-based packages to Summit,” Stevens noted. “The big advantage for us is that, because we have x86 nodes and Intel GPUs, this thing is basically going to run every piece of software that exists. It’ll run standard software, Linux software, literally millions of apps.”

I asked about the costs involved, since it’s something of a mystery with a system like this how a half-billion-dollar budget gets broken down. Really I just thought it would be interesting to know how much of it went to, say, RAM versus processing cores, or how many miles of wire they had to run. Though both Stevens and Damkroger declined to comment, the former did note that “the backlink bandwidth on this machine is many times the total of the entire internet, and that does cost something.” Make of that what you will.

Aurora, unlike its cousin El Capitan at Lawrence Livermore National Lab, will not be used for weapons development.

“Argonne is a science lab, and it’s open, not classified science,” said Stevens. “Our machine is a national user resource; we have people using it from all over the country. A large amount of time is allocated via a process that’s peer reviewed and prioritized to accommodate the most interesting projects. About two-thirds is that, and the other third is Department of Energy stuff, but still unclassified problems.”

Initial work will be in climate science, chemistry and data science, with 15 teams across those areas signed up for major projects to be run on Aurora; details are to be announced soon.