Filed Under: computer

Posting and Donations

Many people ask me, after they register, how can I post here? Simple: let me know you want to be an author and I’ll upgrade your status so you can post and be seen. Lately the usual method has been for people to send me what they want posted, or a link, and I copy and paste it with a reference to where it came from. Those posts come from the anonymous group; they don’t wish to be known but want to point out something of interest.
A few have asked me about the lean sometimes: left, right, middle, or WTF? I post what is sent to me. I don’t necessarily agree with everything I’m sent, but in an effort to allow all views, I post it. If you feel you need to correct it, add to it, or better yet offer another view, by all means please do so. For the record, I tend to think there is a universe of views out there on any subject, issue, or thing. If your mind is polarized and shut off to other views, I don’t have to explain it further: you need to grow. It can be done. Just open your mind up and let it flow.
So, to tie all that up: if you have something to put up here, please do so. Register and/or contact me. If you think it leans too far to one side, if there even is a side, no problem; let’s get it up and let others view it. The site is here for all.

On a different tangent now: this site could use some help with funding, from ads to actual donations, even a few bitcoins. Contact me for the PayPal donation link or another method.
Dale

We know what you’re thinking: Scientists find a way to read minds

By Maxim Lott

Published March 28, 2014

Think mind reading is science fiction?

Think again.

Scientists have used brain scanners to detect and reconstruct the faces that people are thinking of, according to a study accepted for publication this month in the journal NeuroImage.

In the study, scientists hooked participants up to an fMRI brain scanner – which determines activity in different parts of the brain by measuring blood flow – and showed them images of faces. Then, using only the brain scans, the scientists were able to create images of the faces the people were looking at.

“It is mind reading,” said Alan S. Cowen, a graduate student at the University of California, Berkeley, who co-authored the study with professor Marvin M. Chun of Yale and Brice A. Kuhl of New York University.

The study says it is the first to try to reconstruct faces from thoughts. The photos above are the actual photos and reconstructions done in the lab.

While the reconstructions based on 30 brain readings are blurry, they approximate the true images. The reconstructions got the skin color right in every case, and 24 of the 30 correctly detected the presence or absence of a smile.

The brain readings were worse at determining gender and hair color: About two-thirds of the reconstructions clearly detected the gender, and only half got hair color correct.

“There’s definitely room for improvement,” Cowen said, adding that these experiments were conducted two years ago, though they only recently were accepted for publication. He said he and others have been working on improving the process in the interim.

“I’m applying more sophisticated mathematical models [to the brain scan results], so the results should get better,” he said.

To tease out faces based on brain activity, the scientists showed participants in the study 300 faces while recording their brain activity. Then they showed the participants 30 new faces and used their previously recorded patterns to create 30 images based only on their brain scans.
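The article doesn’t give the paper’s exact model, but the recipe it describes (learn from 300 face-and-scan pairs, then reconstruct 30 unseen faces from scans alone) maps onto a standard regression pipeline. A minimal sketch, assuming PCA “eigenface” coefficients as the face representation and ridge regression from voxel activity; every shape and parameter below is illustrative, not from the study:

```python
# Sketch of the train-then-reconstruct recipe described above.
# All data here is random stand-in; real inputs would be fMRI voxel
# patterns (X) and the pixel values of the faces shown (faces).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_pixels = 300, 30, 5000, 64 * 64
X_train = rng.normal(size=(n_train, n_voxels))
faces_train = rng.normal(size=(n_train, n_pixels))
X_test = rng.normal(size=(n_test, n_voxels))

# 1. Compress the training faces into a low-dimensional eigenface space.
pca = PCA(n_components=50).fit(faces_train)
coeffs_train = pca.transform(faces_train)

# 2. Learn a linear map from brain activity to eigenface coefficients.
model = Ridge(alpha=1.0).fit(X_train, coeffs_train)

# 3. For the 30 unseen scans, predict coefficients and invert the PCA
#    to get blurry reconstructed face images.
reconstructions = pca.inverse_transform(model.predict(X_test))
```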

Once the technology improves, Cowen said, applications could range from better understanding mental disorders, to recording dreams, to solving crimes.

“You can see how people perceive faces depending on different disorders, like autism – and use that to help diagnose therapies,” he said.

That’s because the reconstructions are based not on the actual image, but on how the image is perceived by a subject’s brain. If an autistic person sees a face differently, the difference will show up in the brain scan reconstruction.

Images from dreams are also detectable.

“And you can even imagine,” Cowen said, “way down the road, a witness to a crime might want to come in and reconstruct a suspect’s face.”

How soon could that happen?

“It really depends on advances in brain imaging technology, more so than the mathematical analysis. It could be 10, 20 years away.”

One challenge is that different brains show different activity for the same image. The blurry images pictured here are actually averages of the thoughts of six lab volunteers. If one were to look at any individual’s reading, the image would be less consistent.

“There’s a wide variation in how people’s brains work under a scanner – some people have better brains for fMRI – and so if you were to pick a participant at random it might be that their reconstructions are really good, or it might be that their reconstructions are really poor, which is why we averaged across all the participants,” Cowen said.
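Averaging across volunteers is the simplest way to wash out that per-subject variability; in code it is essentially one line, assuming one stack of reconstructions per volunteer:

```python
import numpy as np

# Hypothetical: six volunteers' reconstructions of the same 30 faces.
per_subject = np.random.default_rng(1).normal(size=(6, 30, 64 * 64))
averaged = per_subject.mean(axis=0)  # one averaged image per face
```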

For now, he added, you shouldn’t worry about others snooping on your memories or forcibly extracting information.

“This sort of technology can only read active parts of the brain. So you couldn’t read passive memories – you would have to get the person to imagine the memory to read it,” Cowen said.

“It’s a matter of time, and eventually – maybe 200 years from now – we’ll have some way of reading inactive parts of the brain. But that’s a much harder problem, as it involves measuring very fine details of brain structure that we don’t even really understand.”

The author of this piece, Maxim Lott, can be reached on Twitter at @maximlott or at maxim.lott@foxnews.com
URL http://www.foxnews.com/science/2014/03/28/know-what-youre-thinking-scientists-find-way-to-read-minds/

Hackers plan to offer blueprint for taking over Prius, Escape

Published July 28, 2013 FoxNews.com

http://www.foxnews.com/tech/2013/07/28/hackers-plan-to-offer-blueprint-for-taking-over-prius-escape/

Mechanics work on a Prius at Toyota's newly completed service center in Tajimi, central Japan, Monday, July 22, 2013. Toyota is opening a training facility for mechanics, complete with a test course that simulates 13 driving conditions including cobblestones and bumpy roads, as part of the automaker's efforts to avoid a repeat of its recall fiasco. (AP Photo/Yuri Kageyama)

Editor’s note: Same trick the Cylons used to render ships inoperable.

Two well-known computer software hackers plan to publicly release this week a veritable how-to guide for driving two widely owned automobiles haywire.

According to Reuters, Charlie Miller and Chris Valasek will release the findings — as well as related software — at the Def Con hacking convention in Las Vegas, showing how to manipulate a Toyota Prius and Ford Escape.

The research, conducted with the aid of a grant from the U.S. government, can alternately force a Prius to brake at 80 mph, veer quickly and dramatically, or accelerate, all without the driver’s prompting.

The two hackers have also reportedly figured out a way to disable a Ford Escape’s brakes while the vehicle is traveling at “very low speeds,” no matter how hard the driver attempts to stop.

In both cases, the would-be hacker would have to be inside the car in order to tamper with its computer, according to Reuters.
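The article doesn’t name the interface, but in-car research like this typically goes through the vehicle’s CAN bus, reachable from the OBD-II port inside the cabin, which is why physical access matters. A minimal, read-only sketch using the python-can library; the channel name and setup are assumptions, and this only listens, sending nothing:

```python
# Passive CAN sniffing sketch. Assumes a Linux box plugged into the
# OBD-II port with a SocketCAN interface already configured as "can0".
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")
try:
    for _ in range(100):
        msg = bus.recv(timeout=1.0)  # returns None on timeout
        if msg is not None:
            print(f"id=0x{msg.arbitration_id:03x} data={msg.data.hex()}")
finally:
    bus.shutdown()
```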

“Imagine what would happen if you were near a crowd,” said Valasek, a software consultant who claims his – and Miller’s – research exposes weaknesses in automobile security systems so patches can be applied and criminals thwarted.

Miller and Valasek told Reuters they hope their 100-page white paper will encourage other hackers to uncover additional automobile security flaws before they can be potentially exposed by malicious parties.

“I trust the eyes of 100 security researchers more than the eyes that are in Ford and Toyota,” Miller, a Twitter security engineer, told Reuters.

A Toyota Motor Corp. spokesman said the company was reviewing Miller and Valasek’s work.

“It’s entirely possible to do,” John Hanson reportedly said of the potential hacks. “Absolutely, we take it seriously.”

Meanwhile, Craig Daitch, a Ford Motor Co. spokesman, added, “This particular attack was not performed remotely over the air, but as a highly aggressive direct physical manipulation of one vehicle over an elongated period of time, which would not be a risk to customers at any mass level.”

Scientists create levitation system with sound waves

By Tia Ghose
Published July 16, 2013
LiveScience

A new technique uses sound waves to levitate objects and move them in midair.

Hold on to your wand, Harry Potter: Science has outdone even your best “Leviosa!” levitation spell.

Researchers report that they have levitated objects with sound waves, and moved those objects around in midair, according to a new study.

Scientists have used sound waves to suspend objects in midair for decades, but the new method, described Monday, July 15, in the journal Proceedings of the National Academy of Sciences, goes a step further by allowing people to manipulate suspended objects without touching them.


This levitation technique could help create ultrapure chemical mixtures, without contamination, which could be useful for making stem cells or other biological materials.

Parlor trick
For more than a century, scientists have proposed the idea of using the pressure of sound waves to make objects float in the air. As sound waves travel, they produce changes in the air pressure — squishing some air molecules together and pushing others apart.

By placing an object at a certain point within a sound wave, it’s possible to perfectly counteract the force of gravity with the force exerted by the sound wave, allowing an object to float in that spot.
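Those balance points sit near the pressure nodes of the wave, which repeat every half wavelength. A back-of-the-envelope check, assuming the 24 kHz drive frequency quoted later in the article and the speed of sound in room-temperature air:

```latex
% lambda = c / f, nodes every lambda / 2 (assumes c ~ 343 m/s in air)
\[
  \lambda = \frac{c}{f} = \frac{343~\mathrm{m/s}}{24\,000~\mathrm{Hz}}
          \approx 14.3~\mathrm{mm},
  \qquad
  \Delta z = \frac{\lambda}{2} \approx 7.1~\mathrm{mm}
\]
% At a node, levitation requires the time-averaged acoustic radiation
% force on the object to balance its weight: F_ac = m g.
```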

In previous work on levitation systems, researchers had used transducers to produce sound waves, and reflectors to reflect the waves back, thus creating standing waves.

“A standing wave is like when you pluck the string of a guitar,” said study co-author Daniele Foresti, a mechanical engineer at the ETH Zürich in Switzerland. “The string is moving up and down, but there are two points where it’s fixed.”

Using these standing waves, scientists levitated mice and small drops of liquid.

But then, the research got stuck.

Acoustic levitation seemed to be more of a parlor trick than a useful tool: It was only powerful enough to levitate relatively small objects; it couldn’t levitate liquids without splitting them apart, and the objects couldn’t be moved.

Levitating liquids
Foresti and his colleagues designed tiny transducers powerful enough to levitate objects but small enough to be packed closely together.

By slowly turning off one transducer just as its neighbor is ramping up, the new method creates a moving sweet spot for levitation, enabling the scientists to move an object in midair. Long, skinny objects can also be levitated.
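That “turn one down while its neighbor ramps up” scheme amounts to cross-fading drive amplitudes across the array so the pressure node glides sideways. A minimal sketch of such an amplitude schedule; the array geometry and raised-cosine ramp are assumptions, not the paper’s actual drive profile:

```python
import numpy as np

# Hypothetical linear array: five transducers spaced 10 mm apart.
positions = np.arange(5) * 10.0   # mm
target_x = 23.0                   # where we want the trap right now, mm
width = 10.0                      # mm of overlap between neighbors

# Raised-cosine crossfade: amplitude falls off smoothly with distance
# from the target, so sweeping target_x ramps one element down exactly
# as its neighbor ramps up.
d = np.abs(positions - target_x)
amps = np.where(d < width, 0.5 * (1 + np.cos(np.pi * d / width)), 0.0)
print(np.round(amps, 3))  # here mostly elements 2 and 3 are active
```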

The new system can lift heavy objects, and also provides enough control so that liquids can be mixed together without splitting into many tiny droplets, Foresti said. Everything can be controlled automatically.

The system blasts sound waves at what would be an ear-splitting noise level of 160 decibels, about as loud as a jet taking off. Fortunately, the sound waves in the experiment operated at 24 kilohertz, just above the normal hearing range for humans.
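To put 160 decibels in physical terms, sound pressure level converts back to pascals against the standard 20 micropascal reference; a quick sketch:

```python
P_REF = 20e-6                       # Pa, standard reference pressure in air
p_rms = P_REF * 10 ** (160.0 / 20)  # ~2,000 Pa rms at 160 dB SPL
print(f"{p_rms:.0f} Pa rms, roughly 2% of atmospheric pressure")
```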

However, “if you have some dogs around, they are not going to like it at all,” Foresti told LiveScience.

Right now, the objects can only be moved along in one dimension, but the researchers hope to develop a system that can move objects in two dimensions, Foresti said.

Major advance
The new system is a major advance, both theoretically and in terms of its practical applications, said Yiannis Ventikos, a fluids researcher at University College London who was not involved in the study.

The new method could be an alternative to using a pipette to mix fluids in instances when contamination is an issue, he added. For instance, acoustic levitation could enable researchers to marinate stem cells in certain precise chemical mixtures, without worrying about contamination from the pipette or the well tray used.

“The level of control you get is quite astounding,” Ventikos said.

Read more: http://www.foxnews.com/science/2013/07/16/scientists-create-levitation-system-with-sound-waves/

Meet WorldKit, the projector that turns everything into a touchscreen
By Mika Turim-Nygren — July 5, 2013
http://www.digitaltrends.com/computing/meet-worldkit-the-projector-that-turns-everything-into-a-touchscreen/

Demo of the projected screen on a flat surface.

Note: there are plenty of demo images at the main site; the references are left for you.

When it comes to technological innovation, there are two basic approaches. You can start big, flashy, and expensive, and hope that your invention eventually comes down enough in price for an average user to afford it; think of GPS devices, which were the realm of high-budget military agencies long before ordinary civilians could dream of buying one. Or you can set out from the beginning to design something life-changing that everyone can have access to, rather than just an elite few.

The research team behind WorldKit, a new, experimental technology system, is trying to straddle the gulf between these two extremes. The goal is to transform all of your surroundings into touchscreens, equipping walls, tables, and couches with interactive, intuitive controls. But the team wants to do so without installing oversized iPads into every surface in your home, which could easily run up a six-figure price tag.

So how does the magic happen? With a simple projector – a projector paired with a depth sensor, to be precise. “It’s this interesting space of having projected interfaces on the environment, using your whole world as a sort of gigantic tablet,” said Chris Harrison, a soon-to-be professor in human-computer interaction at Carnegie Mellon University. Robert Xiao, a PhD candidate at Carnegie Mellon and lead researcher on the project, explained that WorldKit uses a depth camera to sense where flat surfaces are in your environment. “We allow a user to basically select a surface on which they can ‘paint’ an interactive object, like a button or sensor,” Xiao said.

We recently chatted with both Harrison and Xiao about their work on the WorldKit project, and learned just how far their imaginations run when it comes to the future of touch technology and ubiquitous computing. Below, we talk about merging the digital and the physical worlds, as well as creative applications for WorldKit that involve really thinking outside the box (or outside the monitor, in this case).
Understanding WorldKit’s workings

We know: the concept of a touchscreen on any surface is a little far out there, so let’s break it down. WorldKit works by pairing a depth-sensing camera lens, such as the one the Kinect uses, with a projector lens. Then, programmers write short scripts on a MacBook Pro using Java, similar to those they might write for an Arduino, to tell the depth camera how to react when someone makes certain gestures in front of it. The depth camera interprets the gestures and then tells the projector to react by projecting certain interfaces. For instance, if someone makes a circular gesture, the system can interpret that by projecting a dial where the gesture was made. Then, when someone “adjusts” the dial by gesturing in front of it, the system can adjust a volume control elsewhere.
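The team’s actual scripts are written in Java, and the article doesn’t show WorldKit’s real API, so the sketch below only illustrates the event-driven pattern just described (a gesture paints a widget, and the widget fires callbacks when manipulated); every name in it is hypothetical:

```python
# Illustrative stand-in for the Java scripting API the article describes.
class Dial:
    """A dial 'painted' onto a surface at (x, y) by a circular gesture."""

    def __init__(self, x, y, radius, on_change):
        self.x, self.y, self.radius = x, y, radius
        self.on_change = on_change
        self.value = 0.0

    def handle_gesture(self, angle_delta_deg):
        # The depth camera reports a rotation over the dial; clamp the
        # value to [0, 1] and fire the user's callback.
        self.value = min(1.0, max(0.0, self.value + angle_delta_deg / 360.0))
        self.on_change(self.value)

def set_volume(level):
    print(f"volume -> {level:.0%}")  # would drive real AV hardware

dial = Dial(x=120, y=80, radius=40, on_change=set_volume)
dial.handle_gesture(90)   # a quarter turn: volume -> 25%
```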


The brilliance – and the potential frustration – of this system lies in its nearly endless possibilities. Currently, whatever you want WorldKit to do, you must program it to do yourself. Xiao and Harrison expressed hope that one day, once WorldKit reaches the consumer realm, there might be an online forum where people can upload and download programming scripts (much like apps) in order to make their WorldKit system perform certain tasks. However, at the moment, WorldKit remains in an R&D phase in the academic realm, allowing its creators to dream big about what they would like to make it do eventually.

In any case, the easiest way to understand how WorldKit works is to watch a demo video of it in action. In the video, researchers touch various surfaces to “paint” them with light from the projector. Afterward, the WorldKit system uses the selected area to display a chosen interface, such as a menu bar or a sliding lighting-control dial, which can then be manipulated through touch gestures.
WorldKit Demo

Robert Xiao demonstrates how to use WorldKit to create a radial dial interface on any available flat surface – in this case, a table.

Currently, WorldKit’s depth sensor is nothing other than a Kinect – the same one that shipped with the Xbox 360 – that connects to a projector that’s mounted to a ceiling or tripod. While this combo is already sensitive enough to track individual fingers and multi-directional gestures down to the centimeter, it does have one major drawback: size. “Certainly the system as it is right now is kind of big, and we all admit that,” Xiao said.
Lights, user, action: Putting WorldKit to use

But the team has high hopes for the technology on the near horizon. “We’re already seeing cell phones on the market that have projectors built in,” Xiao said. “Maybe the back camera, one day, is a depth sensor … You could have WorldKit on your phone.” Harrison added that WorldKit could allow users to take full advantage of their phones for the first time. “A lot of smartphones you have nowadays are easily powerful enough to be a laptop, they just don’t have screens big enough to do it,” Harrison said. “So with WorldKit, you could have one of these phones be your laptop, and it would just project your desktop onto your actual desk.”

If Harrison and Xiao can imagine the mobile version of WorldKit on a smartphone in five years’ time, they have an even crazier vision for 10 or 15 years down the line. “We could actually put the entire WorldKit setup into something about the size of a lightbulb,” Xiao said. For these researchers, a lightbulb packed full of WorldKit potential has truly revolutionary implications. “We’re looking at that as almost as big as the lighting revolution of the early 1800s,” Xiao added.

The possibilities for WorldKit, as you might imagine, are limitless. So far, Harrison and Xiao’s ideas have included an away-from-office status button – the virtual version of a post-it note – and a set of digital TV controls. “You won’t ever have to find your remote again,” Xiao said.

The team’s already envisioning much more ambitious applications, such as experimental interior design. According to Harrison, you could make your own wallpaper, or change the look of your couch. “With projection, you can do some very clever things that basically alter the world in terms of aesthetics,” Harrison said. “Instead of mood lighting, you could have mood interaction.”

The miniature version of WorldKit uses a tiny depth camera called the CamBoard Nano by PMD.

The CamBoard Nano depth camera pairs with a PicoP projector by Microvision.

Xiao, meanwhile, fantasized about the system’s gaming potential. “You could augment the floor so that you didn’t want to step on it, and then play a lava game,” he said, describing a game where you have to cross from one end of the floor to the other, using only the tables and chairs. “You can imagine this being a very exciting gaming platform if you want to do something physical, instead of just using a controller.”
Blurring the boundaries between digital and physical

Xiao has good reason to be enthusiastic. He believes WorldKit gets at the heart of one of the biggest goals of computing research. “Eventually we’d like to see computers sort of fade into the background, and just become the way you do things,” he said. “Right now, it’s very explicit whenever you’re operating a computer that you are interacting with a computer.”
WorldKit: TV controls

Robert Xiao demonstrates how a single WorldKit system can create various interfaces on multiple surfaces at once – in this case, a drop-down menu and volume and lighting controls for watching a movie.

Indeed, part of what makes WorldKit so exciting is that it incorporates real, physical materials into its virtual play. But Harrison is more hesitant to claim that this is always a good thing, especially when it comes to broad, philosophical questions about aesthetics. “In art, there’s a lot that’s nice about having it be rich, and physical, and also enduring,” Harrison argued, talking about digitally “painting” a surface using WorldKit. “So when you go over to the digital domain, are we using some of the things that make art a fundamental part of the human experience? Or are we losing something?”
Google Glass and WorldKit: Seeing vs. touching

There is one realm in which Harrison seems certain that WorldKit’s unique blend of physical and digital properties is at an advantage, and that’s in contrast to Google Glass. While both approaches attempt to augment reality through embedded computing, Harrison believes that Google Glass’s reliance on virtual gestures falls a bit flat.

“The problem with clicking virtual buttons in the air is that’s not really something that humans do,” Harrison said. “We work from tables, we work on walls … that’s something we do on a daily basis … we don’t really claw at the air all that often.” To really understand what he means, just remember when Bluetooth first came out. Not only did everyone look crazy talking to themselves on street corners, but it was also hard not to feel self-conscious starting a conversation into empty air without the physical phone as a prop.

Xiao agreed, emphasizing that WorldKit is able to promote instinctual, unforced interaction by relying on physical objects. “One of the advantages of WorldKit is that all the interactions are out in the world, so you are interacting with something very real and very tangible,” Xiao said. “People are much more willing, much more able, to interact with it in a fluid and natural way.” In this case, perhaps touching – rather than seeing – means believing.
A ray of light: looking into the future

Like true academics, Xiao and Harrison agreed on one of the future applications they would most like to see from WorldKit in the days to come: “A digital whiteboard,” they chimed in simultaneously. Why? Unlike a traditional board, a digital whiteboard would allow computerized collaboration in real time.

Indeed, Xiao and Harrison are no strangers to collaboration – they strongly encourage crowdsourcing of their new technology. Instead of wanting to protect and commercialize WorldKit at this point, they would rather see it developed to its full potential. They are in the process of releasing WorldKit’s source code, and after attending the CHI 2013 Conference on Human Factors in Computing Systems, the “premier international conference on human-computer interaction” held in Paris last April, they’re hoping to get some of the 3,600 other attendees and researchers tinkering with the system soon.


“We’re primarily engineers,” Harrison said. “There are a lot of designers and application builders out there that I’m sure are going to have crazy awesome ideas of what to do with this, [and] just the two of us cannot possibly explore that entire space.”

Even now, researchers in other fields have already started applying WorldKit in ways Xiao and Harrison might never have anticipated. Harrison and Xiao are actually collaborating on a study at the moment with the Human Engineering Research Labs over in Pittsburgh. “They’re primarily concerned with people with cognitive disabilities,” Xiao said. “These are people who may need extra instructions for doing things.”

In the study, cognitively disabled participants are asked to follow a recipe to cook a dish. To help them, WorldKit projects descriptions of the necessary ingredients onto the kitchen table, such as three tomatoes or a cup of water, and doesn’t move on to the next step of the recipe until all the ingredients are physically in place on the table. Essentially, Xiao argued, WorldKit can act as a kind of prosthetic to help the cognitively disabled navigate through daily tasks in their environment.
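The gating behavior described here (project the ingredient list and refuse to advance until everything is physically on the table) is easy to picture as a small loop; detect_objects() below is hypothetical, standing in for WorldKit’s depth-camera detection:

```python
# Sketch of "don't advance until the ingredients are in place".
RECIPE = [
    {"step": "Gather ingredients", "needs": {"tomato": 3, "cup_of_water": 1}},
    {"step": "Chop the tomatoes", "needs": {}},
]

def detect_objects():
    # Placeholder: a real system would return counts seen on the table.
    return {"tomato": 3, "cup_of_water": 1}

for stage in RECIPE:
    # Keep projecting the requirements until the table satisfies them.
    while any(detect_objects().get(item, 0) < n
              for item, n in stage["needs"].items()):
        pass
    print("Now:", stage["step"])
```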

Ultimately, whether we’re talking about an interactive whiteboard or a digital cooking assistant, the goal of WorldKit is the same: using embedded computing to make the interactions between people and computers as seamless, natural, and effortless as possible. Once that happens – once we are actually able to take advantage of computing everywhere without ever touching a computer – all of our lives have the potential to get better.