Researchers Create “Near-Exhaustive,” Ultra-Realistic Cloth Simulation (TechCrunch.com)

This looks pretty great! I don’t think this can translate to MMORPGs anytime soon though as this one cloth cape took six months. It’d be virtually impossible to pre-compute everything a player could potentially do with their character in every possible armor type on every possible mount, etc. However, it does show us where cloth animation is heading and how far it’s progressed just in the past couple of years.

******************************

Cloth is hard to simulate yet it’s important in gaming, scientific analysis, and CGI. That’s why scientists at Berkeley and Carnegie Mellon have spent six months exhaustively exploring all of the possible configurations of a single cloth robe on a cute little animated figure, thereby reducing error and creating some of the nicest simulated cloth you’ll see today. They report on their findings in a paper being presented at SIGGRAPH today.
“The criticism of data-driven techniques has always been that you can’t pre-compute everything,” said Adrien Treuille, associate professor of computer science and robotics at Carnegie Mellon. “Well, that may have been true 10 years ago, but that’s not the way the world is anymore.”

The cloth you see above is made of 29,000 vertices and rendered at 60 frames per second. It flows and moves just like real cloth because all possible motions have been pre-computed and taken into account using a sort of graph of all possible vertex positions. Why is this important? Because it allows for online simulations of clothing on a human body, it can make games far cooler than they are now, and you can use the technology to see how materials will perform in various configurations, different weather patterns, and the like. In short, it gives virtual robots real clothes.
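As a rough illustration of what a “graph of all possible vertex positions” might look like in practice, here is a minimal Python sketch (my own, not the researchers’ code): every node stores a full set of precomputed cloth vertex positions, edges record which states can follow which character actions, and the run-time work reduces to lookups.

import numpy as np

NUM_VERTICES = 29_000  # the vertex count quoted in the article

class ClothMotionGraph:
    """Precomputed cloth states plus the allowed transitions between them."""

    def __init__(self):
        self.nodes = {}  # node id -> (NUM_VERTICES, 3) array of vertex positions
        self.edges = {}  # node id -> {character action -> next node id}

    def add_state(self, node_id, positions):
        self.nodes[node_id] = positions

    def add_transition(self, src, action, dst):
        self.edges.setdefault(src, {})[action] = dst

    def step(self, current, action):
        # Follow the precomputed transition if one exists; otherwise stay put.
        return self.edges.get(current, {}).get(action, current)

# Toy usage: two precomputed states and one transition between them.
graph = ClothMotionGraph()
graph.add_state("idle", np.zeros((NUM_VERTICES, 3)))
graph.add_state("turn_left", np.zeros((NUM_VERTICES, 3)))
graph.add_transition("idle", "turn_left", "turn_left")

state = graph.step("idle", "turn_left")  # -> "turn_left"
vertices = graph.nodes[state]            # the positions a renderer would draw this frame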

Can we expect to see this technology in games any time soon? Not on current consoles.

A common concern about the viability of data-driven techniques focuses on run-time memory footprint. While our approximately 70 MB requirement is likely too large to be practical for games targeting modern console systems (for example, the Xbox 360 has only 512 MB of RAM), we believe its cost is modest in the context of today’s modern PCs (and the coming generation of gaming consoles) which currently have multiple GBs of memory. Furthermore, we have not fully explored the gamut of cloth basis or secondary graph compression strategies and so both better compression, as well as run-time solutions that stream regions of the secondary graph (leaving only the basis representation in core), are likely possible.
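To make the streaming idea in that last sentence concrete, here is a hedged sketch (the file layout, region size, and cache budget are my own assumptions, not details from the paper): keep the compact basis representation resident, load regions of the secondary graph from disk on demand, and evict the least recently used region whenever a small memory budget is exceeded.

from collections import OrderedDict
import numpy as np

class SecondaryGraphStreamer:
    """Load secondary-graph regions on demand, keeping only a few in memory."""

    def __init__(self, path_template, max_regions=8):
        self.path_template = path_template  # hypothetical layout, e.g. "region_{:04d}.npy"
        self.max_regions = max_regions
        self.cache = OrderedDict()          # region id -> ndarray

    def get(self, region_id):
        if region_id in self.cache:
            self.cache.move_to_end(region_id)  # mark as most recently used
            return self.cache[region_id]
        data = np.load(self.path_template.format(region_id))
        self.cache[region_id] = data
        if len(self.cache) > self.max_regions:
            self.cache.popitem(last=False)     # evict the least recently used region
        return data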

It’s quite cute to see how these little figures move inside robes and “casual” clothing, and I’d say we’re just a little past the uncanny valley, at least when it comes to clothing. With a few more months of rendering, I wonder what they could do with floppy bellies and arm fat?

Original TechCrunch Article

Paraffin-Dipped Brain Cut Into 7,400 Slices Offers Highest Resolution 3-D Brain Imagery Ever (TechnologyReview.com)

(That’ll teach me to edit posts after 2:00 AM. Brain. Brain. Not Brian. *sigh*)

This is intense. Scientists have completed a 3-D image of an entire human brain in breathtaking detail using over 7,400 micro-slices of tissue. I wonder how long it took for them to reassemble all of those slices? That waxy-brain-feeding-tray machine is a bit creepy.

******************************

(Human Brain: Scientists have imaged the anatomy of an entire human brain at unprecedented resolution.)

A new resource will allow scientists to explore the anatomy of a single brain in three dimensions at far greater detail than before, a possibility its creators hope will guide the quest to map brain activity in humans. The resource, dubbed the BigBrain, was created as part of the European Human Brain Project and is freely available online for scientists to use.

The researchers behind the BigBrain, led by Katrin Amunts at the Research Centre Jülich and the Heinrich Heine University Düsseldorf in Germany, imaged the brain of a healthy deceased 65-year-old woman using MRI and then embedded the brain in paraffin wax and cut it into 7,400 slices, each just 20 micrometers thick. Each slice was mounted on a slide and digitally imaged using a flatbed scanner.

Alan Evans, a professor at the Montreal Neurological Institute at McGill University in Montreal, Canada, and senior author of a paper that reports the results in the journal Science, says his team then took on “the technical challenge of trying to stitch together 7,500 sheets of Saran wrap” into a three-dimensional object using digital image processing. Many slices had small rips, tears, and distortions, so the team manually edited the images to fix major signs of damage and then used an automated program for minor fixes. Guided by previously taken MRI images and relationships between neighboring sections, they then aligned the sections to create a continuous 3-D object representing about a terabyte of data.
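The group’s actual reconstruction pipeline is of course far more sophisticated, but as a minimal sketch of one standard building block (assuming NumPy; plain phase correlation is my stand-in here, not necessarily the method they used), neighboring slices can be brought into register by estimating the translation between each pair and then stacked into a volume:

import numpy as np

def phase_correlation_shift(fixed, moving):
    """Integer (row, col) shift to apply to `moving` so it lines up with `fixed`."""
    cross_power = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12           # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape))
    dims = np.array(fixed.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]      # wrap large shifts to negative offsets
    return tuple(int(p) for p in peak)

def align_stack(slices):
    """Greedily align each slice to its already-aligned neighbor, then stack."""
    aligned = [slices[0]]
    for img in slices[1:]:
        dr, dc = phase_correlation_shift(aligned[-1], img)
        aligned.append(np.roll(img, shift=(dr, dc), axis=(0, 1)))
    return np.stack(aligned)  # shape: (num_slices, rows, cols)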

Evans says that existing three-dimensional atlases of human brain anatomy are usually limited by the resolution of MRI images—about a millimeter. The BigBrain atlas, in contrast, makes it possible to zoom in to about 20 micrometers in each dimension. That’s not enough to analyze individual brain cells, but it makes it possible to distinguish how layers of cells are organized in the brain.
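Those two numbers are enough for a back-of-envelope size estimate. The bounding box and bit depth below are my assumptions rather than figures from the paper, but they land in the neighborhood of the “about a terabyte” quoted above, next to just a few megabytes at MRI-scale resolution:

def volume_size_bytes(extent_mm, voxel_mm, bytes_per_voxel=2):
    """Storage for a dense 3-D volume covering `extent_mm` at `voxel_mm` resolution."""
    voxels = 1
    for extent in extent_mm:
        voxels *= int(extent / voxel_mm)
    return voxels * bytes_per_voxel

extent = (140, 170, 148)  # assumed bounding box in mm; 7,400 slices x 20 um ~= 148 mm
print(volume_size_bytes(extent, 0.02) / 1e12)  # ~0.9 TB at 20 micrometers
print(volume_size_bytes(extent, 1.0) / 1e6)    # ~7 MB at MRI's ~1 mm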

Joshua Sanes, a neuroscientist at Harvard University, says the project represents one step toward realizing neuroscientists’ aspiration of looking at the human brain “with the sort of cellular resolution [with which] we can look at mouse or fly brains.” But while the atlas is a technical achievement that gives an unprecedented view of an entire brain’s anatomy, it can’t answer questions about brain activity or function, or about the connections between brain cells. The atlas also represents only a single brain, so it doesn’t capture variability between brains.

But Evans says it can be an important resource for future research. One of the larger goals of several brain initiatives worldwide—including the European project and the nascent BRAIN Initiative in the U.S. (see “The Brain Activity Map”)—is to integrate different kinds of data about brain structure and function, he says, and to create computational models of the brain to study processes such as childhood development or neurological diseases. Evans says such work depends on having a clear picture of the brain’s anatomy as a reference, and the BigBrain can serve as a platform on which other information can be mapped. “It’s the mother ship,” he says.

The researchers plan to lead studies integrating the BigBrain with other kinds of data, examining questions such as how genes are expressed and how neurotransmitters are distributed across the brain. They hope to repeat this work in other brains to start to look at how their structures vary.

Original TechnologyReview.com Article

Gamers See More Than Non-Gamers, Study Finds (T3.com)

If you’ve ever struggled to figure out an awesome comeback for a conversation in which gamers are being put down, new research shows that gamers tend to take in and process visual information better than non-gamers. Yay us!

******************************

Hours of gaming improves data input and decision making, claim researchers from Duke University

A study has found that people who spend hours playing games are better at taking in information quickly and processing what they see than those who don’t. Previous research has shown that gamers often have better dexterity and are more precise in their hand movements.

Other studies have also found that gamers tend to be better at reacting to stimuli, but this is the first time research has proven a link between playing games and recall.

The study was conducted by Duke University researchers.

“Gamers see the world differently,” said Greg Appelbaum, an assistant professor of psychiatry in the Duke School of Medicine. “They are able to extract more information from a visual scene.”

The study looked at 125 college students, a mix of intensive gamers and non-gamers.

According to ScienceDaily, each student was put through a visual sensory memory task that flashed a circular arrangement of eight letters for just one-tenth of a second.

After a delay ranging from 13 milliseconds to 2.5 seconds, an arrow appeared pointing to where one of the letters had been.

Participants were asked to identify which letter had been in that spot.
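For readers who want the task structure in one place, here is a rough sketch of a single trial as described above (the timings come from the article; the letter set, the intermediate delay values, and the trial count are my own placeholder assumptions, and a real experiment would use a psychophysics toolkit for precise display timing):

import random
import string

LETTERS = string.ascii_uppercase
DELAYS_S = [0.013, 0.040, 0.150, 0.500, 1.000, 2.500]  # 13 ms up to 2.5 s

def make_trial():
    ring = random.sample(LETTERS, 8)  # eight letters arranged in a circle, shown ~0.1 s
    cue = random.randrange(8)         # the arrow points at one of the eight positions
    return {
        "letters": ring,
        "delay_s": random.choice(DELAYS_S),  # gap between the flash and the arrow
        "cue_position": cue,
        "correct_answer": ring[cue],
    }

trials = [make_trial() for _ in range(240)]  # placeholder trial count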

At every time interval, the intensive action-game players outperformed the non-gamers.

The study said that playing action games requires gamers to make split-second decisions about whether the character in front of them is an enemy or a friend.

It said that gamers make “probabilistic inferences” about whether to shoot or not, what direction they are running in and so on, as quickly as they can.

According to Appelbaum, the more someone plays games, the better they get at doing this. “They need less information to arrive at a probabilistic conclusion, and they do it faster.”

However, the gamers as well as the non-gamers suffered from rapid decay in memory of which letters had been seen on the screen. He said that this suggests gamers don’t have better memories than non-gamers; rather, they appear to start off with more information.

According to Appelbaum, the researchers hypothesised that the increased performance could be down to one of three things: gamers see better, they retain visual memory longer, or they’ve improved their decision-making.

He said that based on the rate of memory decay, it is extremely unlikely to be due to improved memory. However, he can’t say for certain whether the improved response is a result of a combination of the two other factors, or just one.
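One way to see how those last two possibilities could be separated is to fit each group’s accuracy-versus-delay curve with a simple exponential decay, accuracy(t) = a * exp(-t / tau) + b: a higher a with a similar tau points to “more information to start with,” while a larger tau would point to slower forgetting. The sketch below assumes SciPy, and the accuracy numbers are invented purely for illustration, not data from the study.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, b):
    return a * np.exp(-t / tau) + b

delays = np.array([0.013, 0.04, 0.15, 0.5, 1.0, 2.5])    # seconds
gamers = np.array([0.92, 0.88, 0.75, 0.55, 0.42, 0.35])  # invented accuracies
non_gamers = np.array([0.80, 0.76, 0.63, 0.46, 0.36, 0.31])

for label, accuracy in (("gamers", gamers), ("non-gamers", non_gamers)):
    (a, tau, b), _ = curve_fit(decay, delays, accuracy, p0=(0.6, 0.5, 0.3))
    print(f"{label}: starting information a={a:.2f}, decay constant tau={tau:.2f} s")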

Appelbaum said that the university will be continuing with the study to find out.

Original T3 Article

Laser Time Cloak Disappears Data (PopSci.com)

This is the first almost-practical application of temporal cloaking … but it might work too well.

(Temporal Cloaking: In the middle of the image, the light intensity goes to zero, creating a cloaking effect. Lukens et al.)

Electrical engineers at Purdue University have found a way to make your data disappear completely into holes in time. The technique, described in a paper published online in Nature yesterday, uses pulses of light to create “time holes” that make communication across optical fibers disappear completely.

The idea of a data cloak, a way to hide the transfer of data in “time pockets,” has existed for a few years, but until now the effect didn’t last long and wasn’t consistent enough to be of any practical use. This is the first temporal cloak that works quickly enough to hide data streams in telecommunications systems. It can hide data during up to 46 percent of the time window it takes to transfer data (one of the first temporal cloaking techniques worked for less than one percent of the time, according to a Purdue release).

This cloak works using a wave phenomenon called the Talbot effect. Manipulating the timing of light pulses so that the crest of one light wave interacts with the trough of another creates a region of zero light intensity, where the two signals cancel each other out, in which data can be hidden.
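The real device manipulates optical pulses in fibre via the temporal Talbot effect, which the toy below does not attempt to model; it only illustrates, assuming NumPy, the cancellation idea itself: flipping a copy of a wave by half a period over part of the time axis drives the combined intensity to zero there, opening a window in which nothing gets through.

import numpy as np

t = np.linspace(0.0, 4.0, 4000)                      # time, arbitrary units
wave = np.sin(2 * np.pi * t)                         # original light field
phase = np.where((t > 1.5) & (t < 2.5), np.pi, 0.0)  # half-period flip inside the window only
flipped = np.sin(2 * np.pi * t + phase)              # crest meets trough inside the window

intensity = (wave + flipped) ** 2                    # intensity of the combined field
hole = (t > 1.5) & (t < 2.5)
print(intensity[hole].max())   # ~0: the two signals cancel inside the "time hole"
print(intensity[~hole].max())  # ~4: outside the window the two copies add up instead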

(Hidden Data: The cloak flattens the wave, making it invisible to an observer. Lukens et al.)

Right now, author Joseph Lukens, a Purdue graduate student, says the cloaking effect is almost too efficient. “We erased the data-adding event entirely from history, so there’s no way that data could be sent as a useful message to anyone, even a genuine recipient,” he told Nature. Future tweaking might solve this problem to allow super-secret messages to pass through undetected and still make it to the intended recipient. But maybe they can use the current version to destroy embarrassing emails mid-send?

Original Article at Popsci.com
