This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaked out or vibrated because of noisy face data — but when it worked, it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
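
Todo hasn’t published SEER’s control code, but the basic loop it implies (track facial landmarks from the camera, reduce them to a few expression parameters, drive the head’s servos) is easy to sketch. Here is a minimal, hypothetical version using OpenCV and MediaPipe’s FaceMesh; the landmark indices follow rough MediaPipe conventions, and drive_servos stands in for whatever motor interface the robot actually uses.

```python
import cv2
import mediapipe as mp

def drive_servos(head_yaw: float, brow_raise: float, eye_open: float) -> None:
    """Stand-in for the robot's motor interface (hypothetical, not SEER's)."""
    print(f"yaw={head_yaw:+.3f}  brow={brow_raise:+.3f}  eye={eye_open:+.3f}")

cap = cv2.VideoCapture(0)
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark  # 468 normalized landmarks
        # Rough expression cues (approximate landmark indices):
        head_yaw = (lm[454].x + lm[234].x) - 2 * lm[1].x  # cheeks vs. nose tip
        brow_raise = lm[159].y - lm[105].y                # eyebrow height above the eye
        eye_open = abs(lm[159].y - lm[145].y)             # gap between upper and lower eyelid
        drive_servos(head_yaw, brow_raise, eye_open)

cap.release()
```

The real robot presumably smooths and clamps these signals before moving anything, which is likely where the occasional freak-out from noisy face data creeps in.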

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Revcontent is trying to get rid of misinformation with help from the Poynter Institute

CEO John Lemp recently said that thanks to a new policy, publishers in Revcontent’s content recommendation network “won’t ever make a cent” on false and misleading stories — at least, not from the network.

To achieve this, the company is relying on fact-checking provided by the Poynter Institute’s International Fact Checking Network. If any two independent fact checkers from that network flag a story in Revcontent’s network as false, Revcontent’s widget will be removed from it, and the company will not pay out any money on that story (not even revenue earned before it was flagged).
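
Revcontent hasn’t published how this will be implemented, but the rule as described is simple enough to put in code. A minimal sketch, with hypothetical type and function names:

```python
from dataclasses import dataclass

@dataclass
class FactCheck:
    checker: str   # an IFCN-accredited fact-checking organization
    verdict: str   # e.g. "false", "misleading", "accurate"

def should_demonetize(checks: list[FactCheck]) -> bool:
    """Sketch of the stated policy: once two independent fact checkers rate a
    story false, the widget comes down and no revenue is paid out on it."""
    independent_flaggers = {c.checker for c in checks if c.verdict == "false"}
    return len(independent_flaggers) >= 2
```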

In some ways, Revcontent’s approach to fighting fake news and misinformation sounds similar to that of the big social media companies — Lemp, like Twitter, has said his company cannot be the “arbiter of truth,” and like Facebook, he’s emphasizing the need to remove the financial incentives for posting sensationalistic-but-misleading stories.

However, Lemp (who’s spoken in the past about using content recommendations to reduce publishers’ reliance on individual platforms) criticized the big internet companies for “arbitrarily” taking down content in response to “bad PR.” In contrast, he said Revcontent will have a fully transparent approach, one that removes the financial rewards for fake news without silencing anyone.

Lemp didn’t mention any specific takedowns, but the big story these days is Infowars. It seems like nearly everyone has been cracking down on Alex Jones’ far-right, conspiracy-mongering site, removing at least some Infowars-related accounts and content in the past couple of weeks.

The Infowars story also raises the question of whether you can effectively fight fake news on a story-by-story basis, rather than completely cutting off publishers when they’ve shown themselves to consistently post misleading or falsified stories.

When asked about this, Lemp said Revcontent also has the option of removing publishers from the network entirely, but he views that as a “last resort.”

‘Unhackable’ BitFi crypto wallet has been hacked

The BitFi crypto wallet was supposed to be unhackable, and none other than famous weirdo John McAfee claimed that the device – essentially an Android-based mini tablet – would withstand any attack. Spoiler alert: it couldn’t.

First, a bit of background. The $120 device launched at the beginning of this month to much fanfare. McAfee claimed it contained no software or storage and was instead a standalone wallet similar to the Trezor. The website featured a bold claim by McAfee himself, one that would give a normal security researcher pause.

Further, the company offered a bug bounty that outside forces seem to be slowly eroding. They asked hackers to pull coins off a specially prepared $10 wallet, a move that is uncommon in the world of bug bounties. They wrote:

1. We deposit coins into a Bitfi wallet
2. If you wish to participate in the bounty program, you will purchase a Bitfi wallet that is preloaded with coins for just an additional $10 (the reason for the charge is because we need to ensure serious inquiries only)
3. If you successfully extract the coins and empty the wallet, this would be considered a successful hack
4. You can then keep the coins and Bitfi will make a payment to you of $250,000
5. Please note that we grant anyone who participates in this bounty permission to use all possible attack vectors, including our servers, nodes, and our infrastructure

Hackers began attacking the device immediately, eventually hacking it to find the passphrase used to move crypto in and out of the wallet. In a detailed set of tweets, security researchers Andrew Tierney and Alan Woodward began finding holes by attacking the operating system itself. However, BitFi claimed this did not match the bounty to the letter, even though the company did not actually ship any bounty-ready devices.

Then, to add insult to injury, the company earned a Pwnie Award for worst vendor response at the security conference Defcon. As hackers began dismantling the device, BitFi went on the defensive, consistently claiming that their device was secure. And the hackers had a field day. One hacker, 15-year-old Saleem Rashid, was able to play Doom on the device.

The hacks kept coming. McAfee, for his part, kept refusing to accept the hacks as genuine.

Unfortunately, the latest hack may have just fulfilled all of BitFi’s requirements. Rashid and Tierney have been able to pull funds out of the wallet by hacking the passphrase, a primary requirement for the bounty. “We have sent the seed and phrase from the device to another server, it just gets sent using netcat, nothing fancy,” Tierney said. “We believe all conditions have been met.”
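
For context on the “nothing fancy” part: netcat simply opens a TCP connection and writes whatever bytes you give it. A rough Python equivalent of piping a string to nc, with placeholder host and payload, is only a few lines:

```python
import socket

def nc_send(host: str, port: int, payload: bytes) -> None:
    """Roughly what piping data to `nc host port` does: open a TCP
    connection and write the bytes."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Placeholder values for illustration only.
nc_send("example.com", 4444, b"hello from the device\n")
```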

The end state of this crypto mess? BitFi did what most hacked crypto companies do: double down on the threats. In a recently deleted tweet, they made it clear that they were not to be messed with.

The researchers, however, may still have the last laugh.

StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the table, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice as wide as the 110-degree field of view of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

In a bit of shade thrown at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree view found in other headsets. I suspect the comparison was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable, though no VR headset really is either; at least it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K, but the company declined to specify the actual numbers when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels.

Let’s do the math: 16 million sub-pixels divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels across the two displays. Divide 5.3 million by that and you get roughly 1,060 rows. Rounding those off to semi-known numbers gives us about 2560 pixels per eye for the horizontal and 1080 for the vertical resolution.
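
For what it’s worth, here is that back-of-the-envelope arithmetic spelled out, with the 16 million sub-pixel figure as the only hard input:

```python
subpixels = 16_000_000
full_pixels = subpixels / 3                  # full RGB pixels: ~5.3 million
horizontal_total = 5_000                     # "5K" shorthand, both displays combined
vertical = full_pixels / horizontal_total    # ~1,060 rows
per_eye_horizontal = horizontal_total / 2    # ~2,500 columns per eye

print(round(full_pixels), round(per_eye_horizontal), round(vertical))
# Rounded to familiar display numbers: roughly 2560 x 1080 per eye.
```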

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately, the demos of eye-tracking were pretty limited (think a heat map of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.
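
The idea behind foveated rendering is easy to state even if it is hard to ship: spend full shading effort only in a small region around the tracked gaze point and taper it off with angular distance. A toy sketch of that falloff (the thresholds are made up for illustration, not anything StarVR or Tobii has published):

```python
import math

def angle_from_gaze(gaze_dir, pixel_dir) -> float:
    """Angle in degrees between two unit direction vectors."""
    dot = sum(g * p for g, p in zip(gaze_dir, pixel_dir))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def shading_rate(eccentricity_deg: float) -> float:
    """Fraction of full shading work to spend, by distance from the gaze point."""
    if eccentricity_deg < 5.0:    # foveal region: full quality
        return 1.0
    if eccentricity_deg < 20.0:   # near periphery: half-rate shading
        return 0.5
    return 0.25                   # far periphery: quarter-rate shading

# A pixel 30 degrees off-gaze gets a quarter of the shading budget.
print(shading_rate(angle_from_gaze((0.0, 0.0, 1.0), (0.5, 0.0, 0.866))))
```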

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.
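
Once the tracker reports a 3D position for each pupil, the measurement itself reduces to the distance between two points. A tiny hypothetical sketch (not Tobii’s actual API):

```python
import math

def ipd_mm(left_pupil, right_pupil) -> float:
    """Interpupillary distance from two tracked pupil positions, given as
    (x, y, z) tuples in the headset frame, assumed to be in millimeters."""
    return math.dist(left_pupil, right_pupil)

# Illustrative values; ~63 mm is a typical adult IPD.
print(ipd_mm((-31.5, 1.0, 0.0), (31.5, 1.0, 0.0)))
```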

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — an option aimed at the commercial providers who need it.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.