Monday, December 12, 2016

Researchers uncover how hippocampus influences future thinking


Researchers from Boston University School of Medicine have determined that the role of the hippocampus in future imagining lies in the process of constructing a scene in one's mind.
Credit: © memo / Fotolia


Source:
Boston University Medical Center

Summary:

The hippocampus -- historically known for its role in forming memories -- also plays an important role in imagining events in the future. Researchers have now determined that its contribution to future imagining lies in the process of constructing a scene in one's mind.

Over the past decade, researchers have learned that the hippocampus -- historically known for its role in forming memories -- is involved in much more than just remembering the past; it plays an important role in imagining events in the future.

Yet, scientists still do not know precisely how the hippocampus contributes to episodic imagining -- until now. Researchers from Boston University School of Medicine (BUSM) have determined that the role of the hippocampus in future imagining lies in the process of constructing a scene in one's mind.

The findings, which appear in the journal Cerebral Cortex, shed important light on how the brain supports the capacity to imagine the future and pinpoint the brain regions that provide the critical ingredients for performing this feat.

The hippocampus is affected by many neurological conditions and diseases, and it can also be compromised during normal aging. Future thinking is a cognitive ability that is relevant to all humans: it is needed to plan for what lies ahead, whether navigating daily life or making decisions about major milestones further in the future.

Using functional magnetic resonance imaging (fMRI), BUSM researchers performed brain scans on healthy adults while they were imagining events.

They then compared brain activity in the hippocampus when participants answered questions pertaining to the present or the future.

After that, they compared brain activity when participants answered questions about the future that did or did not require imagining a scene.

"We observed no differences in hippocampal activity when we compared present versus future imaging, but we did observe stronger activity in the hippocampus when participants imagined a scene compared to when they did not, suggesting a role for the hippocampus in scene construction but not mental time travel," explained corresponding author Daniela Palombo, PhD, postdoctoral fellow in the memory Disorders Research Center at BUSM and at the VA Boston Healthcare System.

According to the researchers, the importance of studying how the hippocampus contributes to cognitive abilities is bolstered by the hippocampus's involvement in so many conditions.

"These findings help provide better understanding of the role of the hippocampus in future thinking in the normal brain, and may eventually help us better understand the nature of cognitive loss in individuals with compromised hippocampal function," she added.

Palombo believes that once it is known which aspects of future imagining are and are not dependent on the hippocampus, targeted rehabilitation strategies can be designed that exploit the functions that survive hippocampal dysfunction and may provide alternate routes to engaging in future thinking.

sciencedaily.com/


Tuesday, June 7, 2016

Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm

Stuart Armstrong, a philosopher at the University of Oxford's Future of Humanity Institute and one of the paper's authors.

Machines are becoming more intelligent every year thanks to advances being made by companies like Google, Facebook, Microsoft, and many others.

AI agents, as they're sometimes known, can already beat us at complex board games like Go, and they're becoming more competent in a range of other areas.

Now a London artificial-intelligence research lab owned by Google has carried out a study to make sure that we can pull the plug on self-learning machines when we want to.

DeepMind, bought by Google for a reported 400 million pounds — about $580 million — in 2014, teamed up with scientists at the University of Oxford to find a way to make sure that AI agents don't learn to prevent, or seek to prevent, humans from taking control.

The paper — "Safely Interruptible Agents," published on the website of the Machine Intelligence Research Institute (MIRI) — was written by Laurent Orseau, a research scientist at Google DeepMind, and Stuart Armstrong at Oxford University's Future of Humanity Institute.

The researchers explain in the paper's abstract that AI agents are "unlikely to behave optimally all the time." They add:

If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation.

The researchers, who weren't immediately available for interviewing, claim to have created a "framework" that allows a "human operator" to repeatedly and safely interrupt an AI, while making sure that the AI doesn't learn how to prevent or induce the interruptions.

The authors write:

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this.

The researchers found that some algorithms, such as "Q-learning" ones, are already safely interruptible, while others, like "Sarsa," aren't off the shelf but can be modified relatively easily so that they are.

"It is unclear if all algorithms can be easily made safely interruptible," the authors admit.

University of Oxford philosopher Nick Bostrom. Credit: SRF

DeepMind's work with the Future of Humanity Institute is interesting: DeepMind wants to "solve intelligence" and create general purpose AIs, while the Future of Humanity Institute is researching potential threats to our existence.

The institute is led by Nick Bostrom, who believes that machines will outsmart humans within the next 100 years and thinks that they have the potential to turn against us.

Speaking at Oxford University in May 2015 at the annual Silicon Valley Comes to Oxford event, Bostrom said:

I personally believe that once human equivalence is reached, it will not be long before machines become superintelligent after that. It might take a long time to get to human level but I think the step from there to superintelligence might be very quick.

I think these machines with superintelligence might be extremely powerful, for the same basic reasons that we humans are very powerful relative to other animals on this planet.

It's not because our muscles are stronger or our teeth are sharper, it's because our brains are better.

DeepMind knows the technology that it's creating has the potential to cause harm.

The founders — Demis Hassabis, Mustafa Suleyman, and Shane Legg — allowed their company to be bought by Google on the condition that the search giant created an AI ethics board to monitor advances that Google makes in the field.

Who sits on this board and what they do, exactly, remains a mystery.

The founders have also attended and spoken at several conferences about ethics in AI, highlighting that they want to ensure the technology they and others are developing is used for good, not evil.

It's likely that they will look to incorporate some of the findings from the "Safely Interruptible Agents" paper into their work going forward.

Sam Shead


Sunday, June 21, 2015

How good can computers get at predicting events?



In 2012, when Cuba suffered its first outbreak of cholera in 130 years, the government and medical experts there were shocked. But software created by Kira Radinsky had predicted it months earlier.

Radinsky’s software had essentially read 150 years of news reports and huge amounts of data from sources such as Wikipedia, and spotted a pattern in poor countries: floods that occurred about a year after a drought in the same area often led to cholera outbreaks.
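A toy illustration of that kind of rule (this is not Radinsky's system; the event list, place name, and time window below are hypothetical) might scan a list of dated events and flag locations where a flood follows a drought within roughly a year:

```python
from datetime import date, timedelta

# Hypothetical event data for illustration only.
events = [
    {"type": "drought", "place": "CountryX", "date": date(2011, 6, 1)},
    {"type": "flood",   "place": "CountryX", "date": date(2012, 5, 20)},
]

def cholera_risk(events, window=timedelta(days=450)):
    """Return places where a flood followed a drought within the window."""
    at_risk = set()
    for d in (e for e in events if e["type"] == "drought"):
        for f in (e for e in events if e["type"] == "flood"):
            same_place = f["place"] == d["place"]
            gap = f["date"] - d["date"]
            if same_place and timedelta(0) < gap <= window:
                at_risk.add(f["place"])
    return at_risk

print(cholera_risk(events))  # {'CountryX'}
```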

The predictions made by Radinsky’s software are about as accurate as those made by humans.

That digital prognostication ability would be extremely useful in automating many kinds of services.

Radinsky was born in Ukraine and immigrated to Israel with her parents as a preschooler.

She developed the software with Eric Horvitz, co-director at Microsoft Research in Redmond, Washington, where she spent three months as an intern while studying for her PhD at the Technion-Israel Institute of Technology.

Radinsky then started SalesPredict to advise salespeople on how to identify and handle promising leads.

“My true passion,” she says, “is arming humanity with scientific capabilities to automatically anticipate, and ultimately affect, future outcomes based on lessons from the past.”

—Matthew Kalman

technologyreview.com


Sunday, April 14, 2013

Has An Iranian Scientist Named Ali Razeghi Invented A 'Time Machine'?




An Iranian inventor recently claimed he created a "time machine," according to reports. But the Internet is skeptical, and with good reason.
The Telegraph caused a stir Wednesday with a story about a young Tehran-based scientist, Ali Razeghi, and an invention he calls "The Aryayek Time Traveling Machine." 
Reportedly something of a mad scientist, Razeghi claimed the device, which "easily fits into the size of a personal computer case," can predict the next five to eight years of an individual's life with 98 percent accuracy.
The Telegraph cited an earlier story, in Farsi, by the Iranian news agency Fars.
However, The Washington Post reports that Fars quietly deleted the story, even as it began to go viral among Western media outlets. (Fars' link is now dead.) 
The Atlantic Wire points out that the story never even made it to the Science section on the site's English-language side.
A separate interview with Razeghi was published in Farsi by Iranian news site Entekhab. 
The story says Razeghi is a supervisor at Iran's Center for Strategic Inventions and Inventors and claims that his baffling invention won't be available for another few years, at least. 
"We're waiting for conditions to improve in Iran," Razeghi told the outlet, according to a translation by The Huffington Post.
Razeghi was coy during the interview, refusing to give out many details because he was worried his idea would be stolen and reproduced by China. 
He did say, however, that his device incorporates both hardware and software components, and that it cost roughly 500,000 Iranian tomans (about $400). 
When asked whether he was worried the machine might cause problems, he said he envisions it used selectively, to tell a couple the future sex of their child, for example.
Neither Iran nor Razeghi has publicly responded to the report.
Radio Free Europe writes that "most Iran watchers will be treating his announcement with a certain amount of skepticism," in light of a recent flap that involved a Photoshopped picture of Iran's Qaher-313 stealth fighter jet.
Scientists around the world have made previous claims (some dubious, some less so) about their own "time machine" inventions. 
In 2009, a man named Steve Gibbs, of Clearwater, Neb., said he had invented a "hyperdimensional resonator," which he claimed could be used for "out-of-body time travel," according to the Examiner.
More recently, in 2011, physicists from Cornell University in Ithaca, N.Y., announced they had developed a "time cloak" that they say can hide events for trillionths of a second.

huffingtonpost.com

Saturday, May 12, 2012

Imagining the Future Invokes Your Memory


I remember my retirement like it was yesterday. 
As I recall, I am still working, though not as hard as I did when I was younger. 
My wife and I still live in the city, where we bicycle a fair amount and stay fit. We have a favorite coffee shop where we read the morning papers and say hello to the other regulars. We don’t play golf.
In reality, I’m not even close to retirement. 
This is just a scenario I must have spun out at some point in the past. 
There are other future scenarios, but the details aren’t all that important. Notably, all of my futures have a peaceful and contented feel to them. 
They don’t include any financial or health problems, nor do they include boredom—not for me or anyone else I know.
A new study from the January issue of Psychological Science may explain why we are all so optimistic about what’s to come.

The authors report that people tend to remember happy imagined future scenarios better than they recall unhappy ones.
Cognitive scientists are very interested in people’s “remembered futures.” 
The whole idea seems contradictory in a way, as we tend to think of memory in connection with the past—recollections of people and things gone by. 
The fact is that we all imagine the future, and from time to time we recall those imaginary scenarios. 
Recent research has shown that the same brain areas are active when we remember past events and when we think about the future. 
Indeed, some scientists believe that these “memories” are highly adaptive, allowing us to plan and better prepare ourselves for whatever lies in store.

If we can remember the actions and reactions we thought about in the past, our future behavior will be more efficient.
Still, very little was known until recently about how these simulations work. 
Are all future memories equally beneficial? Which scenarios do we recall best? 
Are most people’s forecasts as rosy as mine? Or do we also spin out less optimistic simulations of the years to come, ones that we tend to forget over time?
These are very difficult questions to study in a laboratory—or at least they were until now. 
A team of psychological scientists, headed by Karl K. Szpunar of Harvard University, devised a novel method for generating authentic future simulations, which the team then used to study their characteristics and staying power.
Recalling Tomorrow
Szpunar and his colleagues began by collecting a lot of biographical detail from volunteers’ actual memories. 
This information included people they had known, places they had been and the ordinary things surrounding them. 
I might, for example, tell the researchers about having a beer with my cousin Karen at a bar in Baltimore; buying a television at Best Buy with my wife; and borrowing $10 from my college roommate Roger at the bookstore. 
Szpunar’s team asked for more than 100 of these specific event memories from each of the 48 volunteers in their study.
A week later the researchers took each person’s raw material—all those people, places and things from near and distant pasts—and jumbled it all together. 
They presented the students with random combinations and instructed them to generate imaginary future scenarios for each one. 
For me the random set might have been my roommate Roger, the Baltimore bar and the TV. 
Sometimes the volunteers were instructed to imagine a positive future, sometimes a negative one, and other times a neutral one.
So I might envisage Roger and me having a terrific time cheering on the Orioles at that Baltimore bar, or I could imagine the two of us falling into a bitter argument at the same bar, while the news played on the TV in the background.
Later, the researchers tested the volunteers’ memories of these future scenarios by giving them two of the three details—the bar and Roger, say—and asking them to fill in the missing detail (the TV, in this case) to re-create the simulated future scene. 
The scientists tested some of the volunteers 10 minutes after they had generated the imaginary future scenarios, and they tested others a day later. 
The idea was to see if the emotional content of the imagined futures—positive, negative or neutral—made them more or less memorable.
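A simplified sketch of that design might look like the following Python snippet (the names and details are drawn from the essay's own anecdote, and the counts are illustrative, not the study's actual materials): each volunteer's people, places, and objects are recombined into random triads, each triad is assigned an emotional valence for the imagination task, and memory is later probed by cueing two details and asking for the third.

```python
import random

# Illustrative personal details (borrowed from the essay's anecdote).
people  = ["Roger", "Karen", "my wife"]
places  = ["a Baltimore bar", "Best Buy", "the bookstore"]
objects = ["a TV", "a beer", "$10"]
valences = ["positive", "negative", "neutral"]

def make_trials(n):
    # Recombine the details into random person-place-object triads and
    # assign each an emotional valence for the imagination task.
    return [{
        "person": random.choice(people),
        "place": random.choice(places),
        "object": random.choice(objects),
        "valence": random.choice(valences),
    } for _ in range(n)]

def cued_recall_probe(trial):
    # Later test: present two of the three details, ask for the missing one.
    cues = random.sample(["person", "place", "object"], 2)
    missing = ({"person", "place", "object"} - set(cues)).pop()
    return {c: trial[c] for c in cues}, missing

trials = make_trials(5)
cues, target_slot = cued_recall_probe(trials[0])
```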
Wray Herbert