Reverie Labs uses new machine learning algorithms to fix drug development bottlenecks

Developing new medicines can take years of research and cost millions of dollars before a candidate is even ready for clinical trials. Several biotech startups are using machine learning to revolutionize the process and get drugs into pharmacies more quickly. One of the newest is Reverie Labs, part of Y Combinator’s latest batch. The Boston-based company wants to fix a critical bottleneck in drug development by using recently published machine learning algorithms to identify promising molecules more quickly.

Reverie Labs’ founders Connor Duffy, Ankit Gupta and Jonah Kallenbach, who named their company after a pivotal detail in the HBO series “Westworld,” explain that its tech analyzes early ideas for molecules from pharmaceutical scientists and suggests possible improvements to shorten the amount of time it takes to reach clinical trials. Duffy says Reverie Labs’ ambition is to “become a full service molecule-as-a-service company.” It’s already partnered with several biotech companies and academic institutes working on treatments for diseases including influenza and cancer.

Reverie Labs specializes in the lead development stage, which is when researchers focus on prioritizing and optimizing molecules so they can go to animal and human clinical trials more quickly. Pharmaceutical scientists need to first identify the proteins that cause a disease and then find molecular compounds that can bind to those proteins. Then it becomes a process of elimination as they narrow down those molecules to ones that not only create the results they want, but are also suitable for animal and human studies.

Before clinical trials can start, however, they need to evaluate molecules very carefully in order to understand things like how they are metabolized by the body and their potential toxicity.

“I’ve heard it compared to juggling eight balls at once or playing whack-a-mole,” says Duffy. “You want your compound to be very safe before you put it in people, you want to be efficacious and go where you want it to go in your body and you don’t want side effects. There are a lot of problems drug companies need to think about before putting a molecule in a human, and when you fix one problem, you often come up with another problem. We want to alleviate that by looking at all problems at the same time.”

Lead development is very labor intensive and requires the work of many medicinal chemists. Reverie Labs’ founders say it often takes more than $100 million and two years per drug before a final selection of molecules is ready for clinical trials. Reverie Labs wants to set itself apart from other startups tackling the same problem by taking recently discovered machine learning techniques and applying them to drug development.

“The machine learning algorithms we implemented are some of the most promising advances that have been published in the past couple of years,” says Kallenbach.

First, molecules are “featurized,” or turned into representations that work with machine learning algorithms. Reverie Labs’ tech creates proprietary featurizations based on quantum chemical calculations, then uses them to analyze the molecules’ properties and how they may act in the body. Afterwards, it selects molecules that have the potential to do well in clinical trials or suggests new molecules based on what properties scientists need.
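To make that featurize-then-rank pipeline concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption of mine, not Reverie Labs’ method: the featurization below (character n-grams over SMILES strings) is a toy stand-in for the company’s proprietary quantum-chemistry-based features, and the similarity ranking is a crude proxy for its actual models.

```python
from collections import Counter

def featurize(smiles: str, n: int = 2) -> Counter:
    # Toy featurization: count character n-grams of a SMILES string.
    # (A stand-in only -- Reverie's real features come from quantum
    # chemical calculations, which are far richer than this.)
    return Counter(smiles[i:i + n] for i in range(len(smiles) - n + 1))

def similarity(a: Counter, b: Counter) -> float:
    # Tanimoto-style similarity between two count vectors:
    # shared counts divided by total (union) counts.
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return shared / total if total else 0.0

def rank_candidates(known_actives, candidates):
    # Score each candidate by its best similarity to any known active
    # molecule, then rank highest first -- a crude proxy for
    # "likely to do well" in later testing.
    active_feats = [featurize(s) for s in known_actives]
    scored = [(max(similarity(featurize(c), f) for f in active_feats), c)
              for c in candidates]
    return sorted(scored, reverse=True)

actives = ["CCO", "CCN"]            # hypothetical known-good molecules
candidates = ["CCOC", "c1ccccc1"]   # hypothetical new molecule ideas
ranking = rank_candidates(actives, candidates)
```

In a real lead-development setting, the similarity ranking would be replaced by trained property predictors (toxicity, metabolism, binding affinity), but the overall shape — featurize, score, prioritize — is the same.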

In addition to the machine learning algorithms it uses, Reverie Labs founders say one of the startup’s key differentiators is that it trains its models on customers’ proprietary in-house datasets, which means the tech can integrate more smoothly into existing drug development workflows. Reverie Labs’ software also runs on customers’ virtual private clouds, giving them more security.

While using artificial intelligence to develop new drugs seemed almost like science fiction just a few years ago, the space is developing quickly. Last month, BenevolentAI, one of the first companies to apply deep learning to drug discovery, bought biotech company Promixagen’s operations in the United Kingdom, which it says will make it the first artificial intelligence company to cover the entire drug research and development process. Atomwise, another AI-based drug discovery startup, announced at the beginning of this month that it has raised a $45 million Series A. Other notable startups include Nimbus Therapeutics and Recursion Pharmaceuticals.

The process of creating new drugs is currently very complicated, slow and extremely expensive. With so much room for improvement, the work done by various AI-based startups to improve the process doesn’t necessarily overlap.

“The space doesn’t seem like a zero sum game at all,” says Gupta. “Many players can be involved and the fact that other startups are interested shows that there is legitimacy to the technology.”

“The end result is trying to deliver life-saving cures faster and more cheaply,” adds Duffy. “We don’t really feel any competitiveness. We want everyone to succeed.”

Equity podcast: Theranos’s reckoning, BroadQualm’s stunning conclusion and Lyft’s platform ambitions

Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast where we unpack the numbers behind the headlines.

This week Katie Roof and I were joined by Mayfield Fund’s Navin Chaddha, an investor with early ties to Lyft, to talk about, well, Lyft — as well as two bombshell news events in the form of an SEC fine for Theranos and Broadcom’s hostile takeover bid for Qualcomm hitting the brakes. Alex Wilhelm was not present this week but will join us again soon (we assume he was tending to his Slayer shirt collection).

Starting off with Lyft, there was quite a bit of activity for Uber’s biggest competitor in North America. The ride-sharing startup (can we still call it a startup?) said it would be partnering with Magna to “co-develop” an autonomous driving system. Chaddha talks a bit about how Lyft’s ambitions aren’t to be a vertical business like Uber, but serve as a platform for anyone to plug into. We’ve definitely seen this play out before — just look at what happened with Apple (the closed platform) and Android (the open platform). We dive in to see if Lyft’s ambitions are actually going to pan out as planned. Also, it got $200 million out of the deal.

Next up is Theranos, where the SEC investigation finally came to a head: founder Elizabeth Holmes and former president Ramesh “Sunny” Balwani were formally charged by the SEC with fraud. The SEC says the two raised more than $700 million from investors through an “elaborate, years-long fraud in which they exaggerated or made false statements about the company’s technology, business, and financial performance.” You can find the full story by TechCrunch’s Connie Loizos here, and we got a chance to dig into the implications of what it might mean for how investors scope out potential founders going forward. (Hint: Chaddha says they need to be more careful.)

Finally, BroadQualm is over. After months of hand-wringing over whether or not Broadcom would buy — and then attempt a hostile takeover of — the U.S. semiconductor giant, the Trump administration blocked the deal. A cascading series of events involving CFIUS, a government body, got it to the point where Broadcom’s aggressive dealmaker Hock Tan dropped plans to go after Qualcomm altogether. The largest deal of all time in tech will, indeed, not be happening (for now), and it has potentially pretty big implications for M&A going forward.

That’s all for this week, we’ll catch you guys next week. Happy March Madness, and may fortune favor* your brackets.

Equity drops every Friday at 6:00 am PT, so subscribe to us on Apple Podcasts, Overcast, Pocketcast, Downcast and all the casts.

*assuming you have Duke losing before the Elite Eight.

With great tech success comes even greater responsibility

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests that these platforms have to understand how people are using them, and recognize when those people, or the companies themselves, are trying to manipulate them or use them for nefarious purposes. We can apply that same responsibility filter to individual technologies like artificial intelligence, and indeed to any advanced technology and the impact it could have on society over time.

This was a running theme this week at the South by Southwest conference in Austin, Texas.

The AI debate rages on

While the platform plays are clearly on the front lines of this discussion, tech icon Elon Musk repeated his concerns about AI running amok in a Q&A at South by Southwest. He worries that it won’t be long before we graduate from the narrow (and not terribly smart) AI we have today to a more generalized AI. He is particularly concerned that a strong AI could develop and evolve over time to the point it eventually matches the intellectual capabilities of humans. Of course, as TechCrunch’s Jon Shieber wrote, Musk sees his stable of companies as a kind of hedge against such a possible apocalypse.

Elon Musk with Jonathan Nolan at South by Southwest 2018. Photo: Getty Images/Chris Saucedo

“Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is,” he told the South by Southwest audience.

He went so far as to suggest it could be more of a threat than nuclear warheads in terms of the kind of impact it could have on humanity.

Taking responsibility

Whether you agree with that assessment or not, or even if you think he is being somewhat self-serving with his warnings to promote his companies, he could be touching upon something important about corporate responsibility around the technology that startups and established companies alike should heed.

It was certainly on the mind of Apple’s Eddy Cue, who was interviewed on stage at SXSW by CNN’s Dylan Byers this week. “Tech is a great thing and makes humans more capable, but in and of itself is not for good. People who make it have to make it for good,” Cue said.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

Apple’s Eddy Cue speaking at South by Southwest 2018. Photo: Ron Miller

Cue dodged any of Byers’ questions about competing platforms, saying he could only speak to what Apple was doing because he didn’t have an inside view of companies like Facebook and Google (which he never actually mentioned by name). “I think our company is different than what you’re talking about. Our customers’ privacy is of utmost importance to us,” he said. That includes, he said, limiting the amount of data they collect, because Apple isn’t worried about having enough data to serve more meaningful ads. “We don’t care where you shop or what you buy,” he added.

Andy O’Connell, from Facebook’s Global Policy Development team, speaking on a panel about the challenges of using AI to filter “fake news,” said that Facebook recognizes it can and should play a role if it sees people manipulating the platform. “This is a whole society issue, but there are technical things we are doing and things we can invest in [to help lessen the impact of fake news],” he said. He added that Facebook co-founder and CEO Mark Zuckerberg has framed it as a challenge to the company to make the platform more secure, which includes reducing the amount of false or misleading news that makes it onto the platform.

Recognizing tech’s limitations

As O’Connell put forth, this is not just a Facebook problem or a general technology problem. It’s a social problem, and society as a whole needs to address it. Sometimes tech can help, but we can’t always look to tech to solve every problem. The trouble is that we can never really anticipate how a given piece of technology will behave, or how people will use it, once we put it out there.

Photo: Ron Miller

All of this suggests that none of these problems, some of which we could never have even imagined, are easy to solve. For every action and reaction, there can be another set of unintended consequences, even with the best of intentions.

But it’s up to the companies developing the tech to recognize the responsibility that comes with great economic success, or simply with the impact that whatever they are creating could have on society. “Everyone has a responsibility [to draw clear lines]. It is something we do and how we want to run our company. In today’s world people have to take responsibility and we intend to do that,” Cue said.

It’s got to be more than lip service though. It requires thought and care and reacting when things do run amok, while continually assessing the impact of every decision.