If you were hoping to get some sweet drone footage of a NASCAR race in progress, you may find your quadcopter grounded unceremoniously by a mysterious force: DroneShield is bringing its anti-drone tech to NASCAR events at the Texas Motor Speedway.
The company makes a handful of products, all aimed at detecting and safely intercepting drones that are flying where they shouldn’t. That’s a growing problem, of course, and not just at airports or Area 51. A stray drone at a major sporting event could fall and strike someone, or interrupt the game; at a race it could even cause a serious accident.
Most recently it introduced a new version of its handheld “DroneGun,” which scrambles the UAV’s signal so that it has no choice but to safely put itself down, as these devices are generally programmed to do. You can’t buy one — technically, they’re illegal — but the police sure can.
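That “put itself down” behavior is a standard link-loss failsafe: the flight controller counts missed control packets and, past a threshold, descends on its own. Here is a minimal toy sketch of the idea in Python — a hypothetical model, not DroneShield’s system or any real flight stack:

```python
from enum import Enum, auto

class FlightState(Enum):
    FLYING = auto()
    LANDING = auto()
    LANDED = auto()

class Drone:
    """Toy model of a consumer drone's link-loss failsafe."""
    # Consecutive missed control packets that trigger the failsafe
    # (threshold is made up for illustration).
    FAILSAFE_THRESHOLD = 3

    def __init__(self):
        self.state = FlightState.FLYING
        self.missed_packets = 0
        self.altitude_m = 30.0

    def tick(self, control_link_ok: bool):
        """One control-loop step. A jammer like the DroneGun would make
        control_link_ok False by drowning out the RC signal."""
        if self.state == FlightState.FLYING:
            if control_link_ok:
                self.missed_packets = 0
            else:
                self.missed_packets += 1
                if self.missed_packets >= self.FAILSAFE_THRESHOLD:
                    self.state = FlightState.LANDING  # programmed failsafe
        elif self.state == FlightState.LANDING:
            self.altitude_m = max(0.0, self.altitude_m - 1.5)  # descend slowly
            if self.altitude_m == 0.0:
                self.state = FlightState.LANDED

drone = Drone()
for _ in range(40):          # simulate 40 ticks of continuous jamming
    drone.tick(control_link_ok=False)
print(drone.state)           # the drone has put itself down
```

The point of jamming rather than shooting a drone down is exactly this: the failsafe does the safe landing for you.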
Recently DroneShield’s tech was deployed at the Commonwealth Games on Australia’s Gold Coast and at the Winter Olympics in PyeongChang, and now the company has announced that it was tapped by a number of Texas authorities for the protection of stock car races.
“We are proud to be able to assist a high-profile event like this,” said Oleg Vornik, DroneShield’s CEO, in an email announcing the news. “We also believe that this is significant for DroneShield in that this is the first known live operational use of all three of our key products – DroneSentinel, DroneSentry and DroneGun – by U.S. law enforcement.”
It’s a big get for a company that clearly saw an opportunity in the growing drone market (in combating it, really) and executed well on it.
We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but difficult as those tasks are, they don’t even touch the level of sophistication required to simulate, for example, a dog. Well, this project aims to do just that — in a very limited way, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.
Why do this? Well, although much work has been done to simulate the sub-tasks of perception like identifying an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.
And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.
As an initial foray into this line of research, the team wanted to see if by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.
In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors: a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino to tie the data together.
They recorded many hours of activities — walking in various environments, fetching things, playing at a dog park, eating — syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.
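Syncing streams like this usually comes down to timestamp alignment: for each video frame, find the sensor reading recorded closest in time. A minimal sketch of that step, with made-up sample rates and field names (not the actual DECADE pipeline):

```python
import bisect

def align_to_frames(frame_times, imu_samples):
    """For each video frame timestamp, pick the IMU sample closest in time.
    imu_samples: list of (timestamp, reading) pairs sorted by timestamp."""
    times = [t for t, _ in imu_samples]
    aligned = []
    for ft in frame_times:
        i = bisect.bisect_left(times, ft)
        # Candidates: the sample just before and the one at/after the frame.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(times)),
            key=lambda j: abs(times[j] - ft),
        )
        aligned.append((ft, imu_samples[best][1]))
    return aligned

# IMU at ~100 Hz, video at ~25 fps (illustrative numbers only)
imu = [(t / 100.0, {"gyro": t}) for t in range(50)]
frames = [0.0, 0.04, 0.08]
print(align_to_frames(frames, imu))
```

The result is one (frame, sensor state) pair per frame — the raw material for learning what the dog’s body was doing given what it saw.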
This agent, given certain sensory input — say a view of a room or street, or a ball flying past it — was to predict what a dog would do in that situation. Not to any serious level of detail, of course — but even just figuring out how to move its body and to where is a pretty major task.
“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”
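The researchers train neural networks for this, but the core idea — map what the dog sees to what the dog did — can be illustrated with something as crude as a nearest-neighbor lookup over recorded (observation, action) pairs. Everything below is hypothetical: the features, the actions and the lookup are stand-ins, not the paper’s model:

```python
import math

def predict_action(observation, dataset):
    """Return the recorded action whose observation is closest (Euclidean
    distance) to the query. `dataset` is a list of (feature_vector, action)
    pairs -- a toy stand-in for a model trained on DECADE-style data."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, action = min(dataset, key=lambda pair: dist(pair[0], observation))
    return action

# Made-up 2-D "visual features": (distance_to_ball, ball_speed)
decade_like = [
    ((0.5, 3.0), "chase"),
    ((8.0, 0.0), "sit"),
    ((1.0, 0.5), "sniff"),
]
print(predict_action((0.7, 2.5), decade_like))  # closest to the "chase" example
```

A real model generalizes far beyond lookup, of course — it has to produce joint movements for situations the dog never recorded.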
That can produce some rather complex data: For example, the dog model must know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) can get to in a given image.
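Once a model labels which parts of a scene are walkable, figuring out where the pet (or robot) can get to reduces to reachability over that mask. A small sketch of that downstream step, with a hand-made grid rather than model output:

```python
from collections import deque

def reachable(walkable, start):
    """BFS over a boolean grid: True cells are walkable floor, False cells
    are obstacles (couch, car, tree). Returns the set of cells reachable
    from `start` via 4-connected moves."""
    rows, cols = len(walkable), len(walkable[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and walkable[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# Toy room: a couch (False column) splits the floor; the dog starts top-left.
room = [
    [True,  True,  False, True],
    [True,  True,  False, True],
    [True,  True,  False, True],
]
print(len(reachable(room, (0, 0))))  # only the left side is reachable
```

The hard part the model solves is producing that grid from a single image; the traversal on top of it is straightforward.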
This was just an initial experiment, the researchers say — successful, but with limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”