Robots learn to grab and scramble with new levels of agility

Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists, Dex-Net 4.0, acts as a rudimentary decision-making process, classifying objects as better suited to an ordinary pincer grip or to a suction-cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell exactly what Dex-Net 4.0 bases its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
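
To make that concrete, here is a minimal sketch of the selection logic implied above: score candidate grasps with one learned model per tool, then execute whichever candidate scores highest. The models, helper functions and threshold here are hypothetical placeholders for illustration, not Dex-Net 4.0’s actual code or API.

```python
# Toy sketch of two-tool grasp selection, loosely following the suction-vs-pincer
# decision described above. The models and helper functions are hypothetical
# placeholders, not Dex-Net 4.0's actual code or API.

from dataclasses import dataclass
from typing import Callable, List, Optional, Sequence


@dataclass
class Grasp:
    tool: str        # "suction" or "pincer"
    pose: tuple      # candidate grasp pose in the camera frame
    quality: float   # predicted probability that the grasp succeeds


def best_grasp(
    depth_image,                  # depth image of the bin
    sample_candidates: Callable,  # proposes candidate grasp poses from the image
    suction_model: Callable,      # learned scorer for the suction cup
    pincer_model: Callable,       # learned scorer for the parallel-jaw gripper
    min_quality: float = 0.5,     # below this, don't bother attempting a grasp
) -> Optional[Grasp]:
    """Score every candidate pose with both tools and return the single best grasp."""
    candidates: Sequence = sample_candidates(depth_image)
    scored: List[Grasp] = []
    for pose in candidates:
        scored.append(Grasp("suction", pose, suction_model(depth_image, pose)))
        scored.append(Grasp("pincer", pose, pincer_model(depth_image, pose)))

    best = max(scored, key=lambda g: g.quality, default=None)
    if best is None or best.quality < min_quality:
        return None  # nothing looks graspable; re-image or stir the pile
    return best
```

The interesting part, of course, is the two learned scoring models; the selection rule itself is just a pick of the highest predicted grasp quality.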

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here. Not only did they arrive at a faster trot for the bot, but they also taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
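
At a high level, the recipe is to search over controller behaviors entirely in simulation, keep only what works, and transfer the result to the real robot. The sketch below is a deliberately simplified random-search loop over controller parameters, with a toy stand-in for the simulator; the actual ANYmal work trains neural-network policies with far more sophisticated machinery, so treat this purely as an illustration of the “try thousands of behaviors and keep the ones that work” idea.

```python
# Deliberately simplified simulation-based policy search: perturb a controller,
# evaluate the perturbations in simulation, keep whatever scores best, repeat.
# simulate() is a toy stand-in so the loop runs; in the real setting it would be
# one physics rollout (e.g. a fall-recovery attempt) returning a reward.

import numpy as np


def simulate(params: np.ndarray) -> float:
    """Toy objective standing in for a physics simulator rollout."""
    target = np.linspace(-1.0, 1.0, params.size)
    return -float(np.sum((params - target) ** 2))


def random_search(n_params: int, iterations: int = 500,
                  population: int = 64, step: float = 0.05,
                  seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    best = rng.normal(size=n_params)   # initial controller parameters
    best_score = simulate(best)
    for _ in range(iterations):
        # Try a population of small perturbations of the current best controller.
        candidates = best + step * rng.normal(size=(population, n_params))
        scores = [simulate(c) for c in candidates]
        i = int(np.argmax(scores))
        if scores[i] > best_score:     # keep only behaviors that work better
            best, best_score = candidates[i], scores[i]
    return best


if __name__ == "__main__":
    controller = random_search(n_params=8)
    print("best score:", simulate(controller))
```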

Ikea assembly is the killer app

Let’s say you’re given three bowls, with red and green balls in the center one. Then you’re handed a sheet of paper showing a simple diagram: red and green circles with arrows pointing left and right.

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps toward connecting abstract representations like the one above with the real world, a task that involves something like machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.
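
To see why even a crude version of this grounding is non-trivial, here is a toy sketch that matches colored blobs in a diagram to colored objects in a scene by hue alone, and reads a left/right instruction off each match. Everything in it (the color table, the direction-in-the-key convention) is invented for illustration; Vicarious’s system works from raw images with learned representations rather than hand-coded rules.

```python
# Toy illustration of the grounding problem: pair symbols in a diagram with
# objects in a scene by color alone, then turn each pairing into a move command.
# The data structures and the direction-in-the-key convention are invented for
# this sketch; the real system learns such associations from raw pixels.

from typing import Dict, List, Tuple

Color = Tuple[int, int, int]       # RGB
SceneObject = Tuple[str, Color]    # e.g. ("ball_1", (185, 50, 45))


def color_distance(a: Color, b: Color) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def ground_instructions(diagram: Dict[str, Color],
                        scene: List[SceneObject],
                        max_distance: float = 120.0) -> List[Tuple[str, str]]:
    """Match each scene object to the closest diagram color and emit a command.
    Diagram keys encode the direction, e.g. 'red->left'."""
    commands = []
    for obj_name, obj_color in scene:
        best_key = min(diagram, key=lambda k: color_distance(diagram[k], obj_color))
        if color_distance(diagram[best_key], obj_color) <= max_distance:
            direction = best_key.split("->")[1]   # crude: direction lives in the key
            commands.append((obj_name, direction))
    return commands


# Red circles mean "move left", green circles mean "move right".
diagram = {"red->left": (200, 30, 30), "green->right": (30, 180, 40)}
scene = [("ball_1", (185, 50, 45)), ("ball_2", (40, 160, 60))]
print(ground_instructions(diagram, scene))  # [('ball_1', 'left'), ('ball_2', 'right')]
```

Even this toy version breaks the moment the lighting changes or the “arrow” means something else, which is exactly the gap the Vicarious work is trying to close.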

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

Ubiquity6 acquires AR music startup Wavy

Today, Ubiquity6 announced that it is acquiring Wavy, a small AR music startup founded last year.

In a blog post, the Wavy team confirmed they’ll be joining the Ubiquity6 team and won’t be continuing their work on the Wavy app. “When we met the team at Ubiquity6, it became apparent that joining the team there would be a leap forward towards our shared mission of enabling creators to edit reality,” the post reads.

Wavy’s app sought to give musicians an outlet for bringing concerts into fans’ living rooms through phone-based AR.

The tight team of three joins Ubiquity6 after what was generally a rough year for the consumer-focused AR industry. While the number of supported devices climbed, the actual user base didn’t see much growth. A lot of the progress came in the platform tools, such as Ubiquity6; the startup closed a $27 million Series B led by Benchmark and Index Ventures in August. The company now has just shy of 40 employees.

The Wavy app shares some essential DNA with what Ubiquity6 is looking to build. The app allows people to drop 3D objects into spaces and upload videos of the “music experiences” unfolding in front of them. It’s very fundamental stuff, but at a base level it asks questions about how 3D content can interact with spaces and people, and how those new environments change the context of the art and music.

This fits into Ubiquity6’s idea of a spatial internet, where users can stumble upon 3D environments where AR content lives based on where they are and what their phone camera is seeing. The company hasn’t launched widely, but it ran a pilot program with SFMOMA last year and has also announced that it is working with Disney.

We chatted with Ubiquity6 CEO Anjney Midha at TechCrunch Disrupt SF 2018 about the opportunities and challenges that lie ahead for the consumer-focused AR industry.

Driving down the cost of preserving genetic material, Acorn Biolabs raises $3.3 million

Acorn Biolabs wants consumers to pay it to store their genetic material, in a bet that continuing advances in targeted genetic therapies will yield better healthcare results down the line.

The company’s pitch is to “Save young cells today, live a longer, better, tomorrow.” It’s a gamble on the frontiers of healthcare technology that has managed to net the company $3.3 million in seed financing from some of Canada’s busiest investors.

For the Toronto-based company, the pitch isn’t just around banking genetic material — a practice that’s been around for years — it’s about making that process cheaper and easier.

Acorn has come up with a way to collect and preserve the genetic material contained in hair follicles, giving its customers a way to collect full-genome information at home rather than having to come into a facility and get bone marrow drawn (the practice at one of its competitors, Forever Labs).

“We have developed a proprietary media that cells are submerged in that maintains the viability of those cells as they’re being transported to our labs for processing,” says Acorn Biolabs chief executive Dr. Drew Taylor.

“Rapid advancements in the therapeutic use of cells, including the ability to grow human tissue sections, cartilage, artificial skin and stem cells, are already being delivered. Entire heart, liver and kidneys are really just around the corner. The urgency around collecting, preserving and banking youthful cells for future use is real and freezing the clock on your cells will ensure you can leverage them later when you need them,” Taylor said in a statement.

Typically, the cost of banking a full genome runs roughly $2,000 to $3,000, and Acorn says it can drop that cost to less than $1,000. Beyond the cost of taking and processing the sample, Acorn says storing the genetic material will run roughly $100 a year.

It’s important to note that health insurance doesn’t cover any of this. It’s a voluntary service for those neurotic enough, or concerned enough about the future of healthcare and their own health, to pay for it.

There are also no services that Acorn will provide on the back end of the storage… yet.

“What people do need to realize is that there is power with that data that can improve healthcare. Down the road we will be able to use that data to help people collect that data and power studies,” says Taylor.

The $3.3 million the company raised came from Real Ventures, Globalive Technology, Pool Global Partners and Epic Capital Management, as well as other undisclosed investors.

“Until now, any live cell collection solutions have been highly expensive, invasive and often painful, as well as being geographically limited to specialized clinics,” said Anthony Lacavera, founder and chairman at Globalive. “Acorn is an industry-leading example of how technology can bring real innovation to enable future healthcare solutions that will have meaningful impact on people’s wellbeing and longevity, while at the same time — make it easy, affordable and frictionless for everyone.”

We Company CEO in hot water over being both a tenant and a landlord

The company formerly known as WeWork has come under scrutiny for potential conflict of interest issues regarding CEO Adam Neumann’s partial ownership of three properties where WeWork is (or will be) a tenant. TechCrunch has seen excerpts of the company’s prospectus for investors that details upwards of $100 million in total future rents WeWork will pay to properties owned, in part, by Adam Neumann.

In March 2018, The Real Deal reported that Neumann had purchased a 50 percent stake in 88 University Place alongside fashion designer Elie Tahari. That property was then leased by WeWork, which then leased space within the building to IBM.

Today, the WSJ is reporting that 88 University Place isn’t alone. Neumann also personally invested in properties in San Jose that are either currently leased to WeWork as a tenant or are earmarked for such a purpose. Unlike 88 University, where Neumann is a 50/50 owner with Tahari, the CEO of the We Company — as WeWork is now known — invested in the two San Jose properties as part of a real estate consortium and owns a smaller stake of an unspecified percentage.

These transactions were all disclosed in the company prospectus documents it filed as part of its $700 million bond sale in April 2018. According to the prospectus, WeWork’s total future rents on these properties (partially owned by Neumann) are $110.8 million, as of December 2017.

That doesn’t include the reported $65 million purchase of a Chelsea property by Neumann and partners, which is said to be earmarked for a new WeLive space built from the ground up. That, too, will be subject to rent payments from the We Company to run WeLive out of it.

This raises the question of whether there is a conflict of interest in Neumann being, through WeWork, both the landlord and the tenant of these properties. The WSJ says that the company’s investors are concerned that the CEO could personally benefit from rents or other terms in these deals.

According to WeWork, however, the company has not been made aware of any concerns from its investors about related-party transactions or their disclosure. The company also said that the majority of the board is independent of Neumann and that all of these transactions were approved.

A WeWork spokesperson also had this to say: “WeWork has a review process in place for related party transactions. Those transactions are reviewed and approved by the board, and they are disclosed to investors.”

As it stands now, The We Company is privately held and in the midst of a transition as it contemplates how to turn a substantial profit on its more than 400 property assets across the world. The company is taking a broad-stroke approach, serving tiny startups and massive corporate clients alike, while also offering co-living WeLive spaces to renters and building out the Powered By We platform to spread its bets.

The company is valued at a hefty $47 billion, even after a scaled-back investment from SoftBank (which went from a planned $16 billion to $2 billion). But as the We Company inches toward an IPO, we may start to see a call for tighter corporate governance and more scrutiny of potential conflicts of interest.

Nvidia’s T4 GPUs are now available in beta on Google Cloud

Google Cloud today announced that Nvidia’s Turing-based Tesla T4 data center GPUs are now available in beta in its data centers in Brazil, India, the Netherlands, Singapore, Tokyo and the United States. Google first announced a private test of these cards in November, but that was a very limited alpha test. All developers can now take these new T4 GPUs for a spin through Google’s Compute Engine service.

The T4, which essentially uses the same processor architecture as Nvidia’s consumer RTX cards, slots in between the existing Nvidia V100 and P4 GPUs on the Google Cloud Platform. While the V100 is optimized for machine learning, the T4 (like its P4 predecessor) is more of a general-purpose GPU that also turns out to be great for training models and inferencing.

In terms of machine and deep learning performance, the 16GB T4 is significantly slower than the V100, though if you are mostly running inference on the cards, you may actually see a speed boost. Unsurprisingly, using the T4 is also cheaper than the V100, starting at $0.95 per hour compared to $2.48 per hour for the V100, with another discount for using preemptible VMs and Google’s usual sustained use discounts.
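
For a rough sense of what those rates mean in practice, here is a back-of-the-envelope comparison using only the on-demand prices quoted above (preemptible VMs and sustained-use discounts would lower both sides):

```python
# Back-of-the-envelope GPU cost comparison using the on-demand per-hour prices
# quoted above; preemptible VMs and sustained-use discounts would lower both sides.

T4_PER_HOUR = 0.95     # USD per T4 GPU
V100_PER_HOUR = 2.48   # USD per V100 GPU
HOURS_PER_MONTH = 730  # average hours in a month


def monthly_gpu_cost(per_hour: float, count: int = 1) -> float:
    return per_hour * count * HOURS_PER_MONTH


print(f"1x T4:   ${monthly_gpu_cost(T4_PER_HOUR):,.2f}/month")     # ~$693.50
print(f"4x T4:   ${monthly_gpu_cost(T4_PER_HOUR, 4):,.2f}/month")  # ~$2,774.00
print(f"1x V100: ${monthly_gpu_cost(V100_PER_HOUR):,.2f}/month")   # ~$1,810.40
```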

Google says that the card’s 16GB of memory should easily handle large machine learning models and allow users to run multiple smaller models at the same time. The standard PCI Express 3.0 card also comes with support for Nvidia’s Tensor Cores to accelerate deep learning and Nvidia’s new RTX ray-tracing cores. Performance tops out at 260 TOPS, and developers can connect up to four T4 GPUs to a virtual machine.

It’s worth stressing that this is also the first GPU in the Google Cloud lineup that supports Nvidia’s ray-tracing technology. There isn’t a lot of software on the market yet that actually makes use of this technique, which allows you to render more lifelike images in real time, but if you need a virtual workstation with a powerful next-generation graphics card, that’s now an option.

With today’s beta launch of the T4, Google Cloud now offers quite a variety of Nvidia GPUs, including the K80, P4, P100 and V100, all at different price points and with different performance characteristics.