How we scaled our startup by being remote first

Startups are often associated with the benefits and toys provided in their offices. Foosball tables! Free food! Dog friendly! But what if the future of startups was less about physical office space and more about remote-first work environments? What if, in fact, the most compelling aspect of a startup work environment is that the employees don’t have to go to one?

A remote-first company model has been Seeq’s strategy since our founding in 2013. We have raised $35 million and grown to more than 100 employees around the globe. Remote-first is clearly working for us and may be the best model for other software companies as well.

So, who is Seeq and what’s been the key to making the remote-first model work for us? And why did we do it in the first place?

Seeq is a remote-first startup, meaning it was founded with the intention of not having a physical headquarters or offices, and it still operates that way. The company is developing an advanced analytics application that enables process engineers and subject matter experts in oil & gas, pharmaceuticals, utilities, and other process manufacturing industries to investigate and publish insights from the massive amounts of sensor data they generate and store.

To succeed, we needed to build a team quickly with two skill sets: 1) software development expertise, including machine learning, AI, data visualization, open source, agile development processes, cloud, etc. and 2) deep domain expertise in the industries we target.

Which means there is no one location where we can hire all the employees we need: Silicon Valley for software, Houston for oil & gas, New Jersey for fine chemicals, Seattle for cloud expertise, water utilities across the country, and so forth. But being remote-first has made recruiting and hiring for these high-demand roles much easier than if we were co-located.

Job postings on remote-specific web sites like FlexJobs, Remote.co and Remote OK typically draw hundreds of applicants in a matter of days. This enables Seeq to hire great employees who might not call Seattle, Houston or Silicon Valley home – and is particularly attractive to employees with location-dependent spouses or employees who simply want to work where they want to live.

But a remote-first strategy and quality hires for the skills you need are not enough: succeeding as a remote-first company requires a plan and execution around the “3 C’s of remote-first”.

The three requirements for remote-first success are the three C’s: communication, commitment and culture.

This robot learns its two-handed moves from human dexterity

If robots are really to help us out around the house or care for our injured and elderly, they’re going to want two hands… at least. But using two hands is harder than we make it look — so this robotic control system learns from humans before attempting to do the same.

The idea behind the research, from the University of Wisconsin-Madison, isn’t to build a two-handed robot from scratch, but simply to create a system that understands and executes the same type of manipulations that we humans do without thinking about them.

For instance, when you need to open a jar, you grip it with one hand and move it into position, then tighten that grip as the other hand takes hold of the lid and twists or pops it off. There’s so much going on in this elementary two-handed action that it would be hopeless to ask a robot to do it autonomously right now. But that robot could still have a general idea of why this type of manipulation is done on this occasion, and do what it can to pursue it.

The researchers first had humans wearing motion capture equipment perform a variety of simulated everyday tasks, like stacking cups, opening containers and pouring out the contents, and picking up items with other things balanced on top. All this data — where the hands go, how they interact and so on — was chewed up and ruminated on by a machine learning system, which found that people tended to do one of four things with their hands:

  • Self-handover: This is where you pick up an object and put it in the other hand so it’s easier to put it where it’s going, or to free up the first hand to do something else.
  • One hand fixed: An object is held steady by one hand providing a strong, rigid grip, while the other performs an operation on it like removing a lid or stirring the contents.
  • Fixed offset: Both hands work together to pick something up and rotate or move it.
  • One hand seeking: Not actually a two-handed action, but the principle of deliberately keeping one hand out of action while the other finds the object required or performs its own task.
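
To make the idea concrete, here is a minimal, purely illustrative sketch of how a short window of two-hand tracking data might be sorted into one of those four modes. The features, thresholds and logic are invented for this example; they are not the Wisconsin-Madison team’s actual model.

```python
# Hypothetical sketch only: classify a window of two-hand motion features into one of
# the four bimanual modes described above. Feature names and thresholds are made up.
def classify_window(left_speed, right_speed, hands_distance,
                    gripping_left, gripping_right):
    """Guess the bimanual mode from coarse features of a short motion window.

    left_speed / right_speed  -- mean hand speeds over the window (m/s)
    hands_distance            -- mean distance between the hands (m)
    gripping_left / _right    -- whether each hand is currently holding something
    """
    moving_left = left_speed > 0.05
    moving_right = right_speed > 0.05

    if gripping_left and gripping_right and hands_distance < 0.10:
        # Both hands on (or passing) the same object at close range.
        return "self-handover"
    if gripping_left and gripping_right:
        # Both hands grip and move together, e.g. carrying a tray.
        return "fixed offset"
    if (gripping_left and not moving_left and moving_right) or \
       (gripping_right and not moving_right and moving_left):
        # One hand holds an object steady while the other works on it.
        return "one hand fixed"
    # Otherwise only one hand is doing anything useful right now.
    return "one hand seeking"


# Example: the left hand steadies a bowl while the right hand stirs.
print(classify_window(left_speed=0.01, right_speed=0.20,
                      hands_distance=0.25,
                      gripping_left=True, gripping_right=False))  # "one hand fixed"
```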

The robot put this knowledge to work not in doing the actions itself — again, these are extremely complex motions that current AIs are incapable of executing — but in its interpretations of movements made by a human controller.

You would think that when a person is remotely controlling a robot, it would just mirror the person’s movements exactly. In the tests, the robot does exactly that, to provide a baseline of how it performs without any knowledge of these “bimanual actions,” and many of those actions turn out to be simply impossible that way.

Think of the jar-opening example. We know that when we’re opening the jar, we have to hold one side steady with a stronger grip and may even have to push back with the jar hand against the movement of the opening hand. If you try to do this remotely with robotic arms, that information is no longer present, and one hand will likely knock the jar out of the other’s grip, or fail to grip it properly because the other hand isn’t helping out.

The system created by the researchers recognizes when one of the four actions above is happening, and takes measures to make sure that they’re a success. That means, for instance, being aware of the pressures exerted on each arm by the other when they pick up a bucket together. Or providing extra rigidity to the arm holding an object while the other interacts with the lid. Even when only one hand is being used (“seeking”), the system knows that it can deprioritize the movements of the unused hand and dedicate more resources (be it body movements or computational power) to the working hand.
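
The other half of the idea, adjusting how each arm is driven once a mode is recognized, can also be sketched in a few lines. Again, this is a hypothetical illustration with invented gains rather than the published controller: the point is simply that the recognized mode changes each arm’s stiffness and how closely it tracks the operator, instead of mirroring the operator exactly.

```python
# Hypothetical sketch only: mode-dependent teleoperation gains. Values are invented.
from dataclasses import dataclass


@dataclass
class ArmCommand:
    stiffness: float      # how rigidly the arm holds its pose (arbitrary units)
    tracking_gain: float  # how closely it follows the operator's motion (0..1)


def adapt_commands(mode: str) -> tuple[ArmCommand, ArmCommand]:
    """Return (holding_arm, working_arm) commands for a recognized bimanual mode."""
    if mode == "one hand fixed":
        # Stiffen the holding arm so the working arm can push or twist against it.
        return ArmCommand(stiffness=1.0, tracking_gain=0.2), \
               ArmCommand(stiffness=0.4, tracking_gain=1.0)
    if mode == "fixed offset":
        # Keep both arms coupled so a shared object such as a tray stays level.
        shared = ArmCommand(stiffness=0.7, tracking_gain=0.7)
        return shared, shared
    if mode == "self-handover":
        # Moderate stiffness on both arms while the object changes hands.
        both = ArmCommand(stiffness=0.6, tracking_gain=0.8)
        return both, both
    # "one hand seeking": deprioritize the idle arm, give fidelity to the active one.
    return ArmCommand(stiffness=0.3, tracking_gain=0.1), \
           ArmCommand(stiffness=0.5, tracking_gain=1.0)


holding_arm, working_arm = adapt_commands("one hand fixed")
print(holding_arm, working_arm)
```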

In videos of demonstrations, it seems clear that this knowledge greatly improves the success rate of the attempts by remote operators to perform a set of tasks meant to simulate preparing a breakfast: cracking (fake) eggs, stirring and shifting things, picking up a tray with glasses on it and keeping it level.

Of course this is all still being done by a human, more or less — but the human’s actions are being augmented and re-interpreted into something more than simple mechanical reproduction.

Doing these tasks autonomously is still a long way off, but research like this forms the foundation for that work. Before a robot can attempt to move like a human, it has to understand not just how humans move, but why they do certain things in certain circumstances and, furthermore, what important processes may be hidden from obvious observation — things like planning the hand’s route, choosing a grip location and so on.

The Madison team was led by Daniel Rakita; their paper describing the system is published in the journal Science Robotics.

Talkspace picks up $50 million Series D

Talkspace, the platform that lets patients and therapists communicate online, has today announced the close of a $50 million financing round led by Revolution Growth. Existing investors, such as Norwest Venture Partners, Omura Capital, Spark Capital and Compound Ventures, are also participating in the round.

As part of the deal, Revolution Growth’s Patrick Conroy will join the Talkspace board of directors.

Talkspace launched back in 2012 with a mission to make therapy accessible to as many people as possible. The platform allows users to pay a subscription fee for unlimited messaging with one of the company’s 5,000 healthcare professionals. Since launch, Talkspace has rolled out products specific to certain users, such as teenagers or couples.

The company also partners with insurance providers and employers to offer Talkspace services to their members and employees as part of its commercial business. Today, Talkspace also announced a partnership with Optum Health, which expands its commercial reach to 5 million people.

According to the release, Talkspace will use the funding to accelerate the growth of its commercial business.

Here’s what Talkspace CEO and co-founder Oren Frank had to say in a prepared statement:

Our advanced capabilities in data science enable us to not only open access to therapy, but also identify the attributes of successful therapeutic relationships and apply that knowledge throughout the predictive products we build, to the therapists that use our platform, and in the content we provide.

This brings Talkspace’s total funding to $110 million.