‘Post-reality’ video of CG imagery projected on a dancing man at high framerates

Not sure what there is to add to the headline, really. Well, I guess I should probably explain a bit.

Back in 2016 (on my birthday in fact) researchers from the University of Tokyo posted an interesting video showing a projector and motion tracking system working together to project an image onto moving, deforming surfaces like a flapping piece of paper or dancing person’s shirt.

Panasonic one-upped this with a more impressive display the next year, but the original lab has clapped back with a new video (spotted by New Atlas) that combines the awkwardness of academia with the awkwardness of dancing alone in the dark. And a quote from “The Matrix.”

Really though, it’s quite cool. Check out the hardware:

This dynamic projection mapping system, which they call DynaFlash v2, operates at 947 frames per second, using a depth-detection system running at the same rate to determine exactly where the image needs to be.

Not only does this let an image follow a person’s movement and orientation, but it also tracks deformations in the material, such as stretching or the natural contortions of the body in motion.
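The video doesn’t go into the pipeline, but a system like this is, at its core, a very tight capture-warp-project loop: read a depth frame, work out where each projector pixel will land on the deformed surface, and pre-distort the image so it looks correct once it gets there. Here’s a deliberately toy Python sketch of the warp step; the shift-by-depth rule and every constant in it are invented for illustration, and a real system would do a calibrated 3-D reprojection on dedicated hardware to stay within its roughly one-millisecond frame budget.

```python
import numpy as np

def warp_texture_to_depth(texture: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Toy stand-in for the geometric warp in a capture-warp-project loop.

    Shifts each texture row horizontally in proportion to the measured
    depth of that row, so the projected pattern appears to "stick" to a
    surface bulging toward or away from the projector. A real system
    would reproject through calibrated camera/projector models instead.
    """
    h, w = depth.shape
    out = np.empty_like(texture)
    # Convert mean row depth (metres) into a pixel shift; the factor of
    # 50 is arbitrary and would come from calibration in practice.
    shifts = (depth.mean(axis=1) * 50).astype(int)
    for row in range(h):
        out[row] = np.roll(texture[row], shifts[row])
    return out

# Simulated inputs: a striped texture and a surface bulging at the center.
texture = (np.tile(np.arange(64), (64, 1)) % 8 < 4).astype(float)
yy, xx = np.mgrid[0:64, 0:64]
depth = 1.0 + 0.1 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)

frame = warp_texture_to_depth(texture, depth)  # this would go to the projector
```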

The extreme accuracy of this process makes for strange possibilities. As Ishikawa Watanabe, the leader of the lab, puts it:

The capacity of the dynamic projection mapping linking these components is not limited to fusing colorful unrealistic texture to reality. It can freely reproduce gloss and unevenness of non-existing materials by adaptively controlling the projected image based on the three-dimensional structure and motion of the applicable surface.

Perhaps it’s easier to show you:

Creepy, right? It’s using rendering techniques most often seen in games to produce the illusion that there’s light shining on non-existent tubes on the dancer’s body. The illusion is remarkably convincing.
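Those rendering techniques are, in all likelihood, standard real-time shading models applied to live depth data. As an illustration (not the lab’s actual shader), here is the classic Blinn-Phong specular term in Python, which computes exactly the kind of moving highlight that can make a matte shirt look glossy when projected back onto it; the light setup and inputs are assumptions.

```python
import numpy as np

def blinn_phong_specular(normals, light_dir, view_dir, shininess=32.0):
    """Blinn-Phong specular term over a field of surface normals.

    normals:   H x W x 3 array of unit surface normals (here these would
               be estimated per frame from the depth sensor)
    light_dir: unit 3-vector pointing toward a virtual light source
    view_dir:  unit 3-vector pointing toward the viewer
    Returns an H x W map of highlight intensities in [0, 1].
    """
    # Halfway vector between the light and view directions.
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)

    # The highlight peaks where the surface normal aligns with the
    # halfway vector; 'shininess' controls how tight the highlight is.
    n_dot_h = np.clip(normals @ half, 0.0, 1.0)
    return n_dot_h ** shininess

# Example: a flat patch facing the viewer, lit from the upper left.
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0
light = np.array([-1.0, 1.0, 1.0]) / np.sqrt(3)
view = np.array([0.0, 0.0, 1.0])
print(blinn_phong_specular(normals, light, view))
```

Re-render that highlight map every frame as the normals change, project it, and you get the illusion of light playing over tubes that aren’t there.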

It’s quite a different approach to augmented reality, and while I can’t see it in many living rooms, it’s clearly too cool to go unused. Expect it to show up in demos from tech companies and in the work of performance artists and musicians. I can’t wait to see what Watanabe comes up with next.

Autonomous cars could peep around corners via bouncing laser

Autonomous cars gather up tons of data about the world around them, but even the best computer vision systems can’t see through brick and mortar. But by carefully monitoring the reflected light of a laser bouncing off a nearby surface, they might be able to see around corners — that’s the idea behind recently published research from Stanford engineers.

The basic idea is one we’ve seen before: It’s possible to discern the shape of an object on the far side of an obstacle by shining a laser or structured light on a surface nearby and analyzing how the light scatters. Patterns emerge when some pulses return faster than others, or are otherwise changed by having interacted with the unseen object.
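The raw quantity being measured is timing. A minimal sketch, assuming a pulsed laser, a picosecond-scale clock and a confocal setup (laser spot and detector aimed at the same point on the relay wall): every returning photon’s arrival time pins down the total length of the path it took, and a collection of those path lengths constrains where the hidden object can be.

```python
C = 299_792_458.0  # speed of light, m/s

def total_path_length(arrival_time_s: float) -> float:
    """Distance a photon travelled between leaving the laser and
    reaching the detector, straight from its time of flight."""
    return C * arrival_time_s

def hidden_leg_length(arrival_time_s: float, wall_distance_m: float) -> float:
    """Rough one-way distance from the relay wall to the hidden object.

    Simplification: assumes the laser spot and the detector's view share
    the same wall point, so after subtracting the two known legs to and
    from the wall, the remaining wall-object-wall round trip splits evenly.
    """
    remaining = total_path_length(arrival_time_s) - 2.0 * wall_distance_m
    return remaining / 2.0

# A photon arriving 20 ns after the pulse, with the wall 1.5 m away,
# puts the hidden object roughly 1.5 m beyond the wall point.
print(hidden_leg_length(20e-9, 1.5))  # ~1.5 m
```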

It isn’t easy to do. Reflected laser light can easily be lost in the noise of broad daylight, for instance. And if you want to reconstruct a model of the object precise enough to tell whether it’s a person or a stop sign, you need a lot of data and the processing power to crunch it.

It’s this second problem that the Stanford researchers, from the school’s Computational Imaging Group, address in a new paper published in Nature.

“Despite recent advances, [non-line-of-sight] imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light,” the abstract reads in part.

“A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3-D structure of the hidden object from the noisy measurements,” said grad student David Lindell, co-author of the paper, in a Stanford news release.

The data collection process still takes a long time as the laser scans across a surface: think a couple of minutes to an hour, though that’s still on the low side for this type of technique. The photons do their thing, bouncing around on the far side of the obstacle, and some make it back near their point of origin, where they are picked up by a high-sensitivity detector.

The detector sends its data on to a computer, which processes it using the algorithm created by the researchers. Their work allows this step to proceed extremely quickly, reconstructing the scene in relatively high fidelity with just a second or two of processing.
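That speed, roughly speaking, comes from recasting reconstruction as a deconvolution, which can be solved in the frequency domain with fast Fourier transforms instead of heavyweight iterative solvers. The 1-D Wiener deconvolution below is only an illustration of that general idea, not the authors’ actual method; the test signal and noise level are made up.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, snr=100.0):
    """Recover a signal blurred by a known kernel, in the Fourier domain.

    Dividing by the kernel's spectrum inverts the blur; the SNR term
    damps frequencies where the kernel is weak, so noise isn't amplified
    without bound. Everything here runs in O(n log n) thanks to the FFT.
    """
    n = len(measured)
    M = np.fft.fft(measured)
    K = np.fft.fft(kernel, n=n)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(M * H))

# Example: three spikes blurred by an exponential tail, then recovered.
kernel = np.exp(-np.arange(64) / 4.0)
signal = np.zeros(256)
signal[[40, 90, 200]] = 1.0
measured = np.convolve(signal, kernel)[:256]
recovered = wiener_deconvolve(measured, kernel)
print(recovered.round(2)[[40, 90, 200]])  # close to 1.0 at each spike
```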

The white model is the actual shape of the unseen item, and the green mesh is what the system detected (from only one side, of course).

The resulting system is also less susceptible to interference, allowing it to be used in indirect sunlight.

Example of a sign reconstructed from being lasered outside.

Of course, it’s not much use detecting a person on the far side of a wall if it takes an hour to do so. But the laser setups used by the researchers are very different from the high-speed scanning lasers found in lidar systems. And the algorithm they built should be compatible with those, which could vastly reduce the data acquisition time.

“We believe the computation algorithm is already ready for LIDAR systems,” said Matthew O’Toole, co-lead author of the paper (with lab leader Gordon Wetzstein). “The key question is if the current hardware of LIDAR systems supports this type of imaging.”

If their theory is correct, then this algorithm could soon enable existing lidar systems to analyze their data in a new way, potentially spotting a moving car or person approaching an intersection before it’s even visible. It’ll be a while still, but at this point it’s just a matter of smart engineering.
