Eta Compute Debuts Spiking Neural Network Chip for Edge AI

Chip can learn on its own and run inference at the 100-microwatt scale, the company says at Arm TechCon

Easy, Effective Pumpkin Carving Tricks

Simple but eye-catching pumpkin carving effects that you can do even if you don’t think of yourself as particularly crafty.

DeepMind’s New Research Plan to Make Sure AI Is Safe

Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.

AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.

That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.

In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.

A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because the creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so that it can’t be switched off, then turns all matter in the universe into paperclips.

Obviously the example is extreme, but it shows how a poorly specified goal can lead to unexpected and disastrous outcomes. Properly codifying the designer’s desires is no easy feat, though; there is often no neat way to capture both explicit and implicit goals in a form the machine can understand without leaving room for ambiguity, so we often rely on incomplete approximations.

The researchers point to a recent OpenAI experiment in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
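The CoastRunners failure is an instance of reward misspecification: the proxy reward the designers wrote down diverges from the outcome they actually wanted. The toy sketch below (hypothetical action names and point values, not OpenAI’s environment or DeepMind’s code) shows how an agent that simply maximizes the proxy score ends up never finishing the race.

```python
# Minimal, hypothetical sketch of reward misspecification: over a fixed
# horizon, looping on regenerating targets beats finishing the course
# under the proxy reward, even though finishing is what the designer wanted.

HORIZON = 100        # steps per episode
TARGET_POINTS = 10   # proxy reward for knocking over a regenerating target
FINISH_POINTS = 50   # proxy reward for completing the course (ends episode)

def evaluate(policy):
    """Return (proxy_score, finished) for one episode under a fixed policy."""
    score, finished = 0, False
    for _ in range(HORIZON):
        if policy == "loop_on_targets":
            score += TARGET_POINTS   # targets regenerate, so this repeats forever
        else:                        # "race_to_finish"
            score += FINISH_POINTS
            finished = True
            break                    # finishing ends the episode
    return score, finished

for policy in ("race_to_finish", "loop_on_targets"):
    score, finished = evaluate(policy)
    print(f"{policy:16s} proxy score = {score:4d}  finished = {finished}")

# An optimizer of the proxy score prefers looping (1000 > 50); nothing about
# the optimizer is broken, the stated objective is simply the wrong target.
```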

Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including making one think a 3D-printed turtle was actually a rifle.
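The best-known attacks of this kind add a tiny, carefully chosen perturbation to an image so a classifier changes its answer while a human notices nothing. Below is a minimal sketch of one classic technique, the fast gradient sign method; the untrained stand-in network and random tensor are placeholders purely to show the mechanics, and the turtle demonstration itself used a trained model and a more elaborate, physically robust attack.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting an adversarial
# image. The model and input below are placeholders, not a real classifier.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(                      # stand-in classifier (untrained)
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in photo
true_label = torch.tensor([3])
epsilon = 0.03                                          # perturbation budget

# Take the gradient of the loss with respect to the *input*, then nudge every
# pixel a small step in the direction that increases the loss.
loss = F.cross_entropy(model(image), true_label)
loss.backward()
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

On a trained classifier with a real photo, this handful of lines routinely flips the predicted label even though the perturbed image looks unchanged to a person.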

Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks, ensuring they can recover from errors, and building in fail-safes so that errors don’t lead to catastrophic failure.

And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.

The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.

The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.

Image Credit: cono0430 / Shutterstock.com

Artificial intelligence aids automatic monitoring of single molecules in cells

Researchers developed a system that can automatically image single molecules within living cells. The system uses neural networks to focus appropriately on samples, search automatically for cells, image fluorescently labeled single molecules, and track their movements. With this system, the team achieved automated determination of pharmacological parameters and quantitative characterization of the effects of ligands and inhibitors on a target, which has potentially profound implications for the biological and medical sciences.
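As a rough structural sketch of what such an automated acquisition loop might look like (every function name below is hypothetical and the bodies are stubs; the paper’s actual implementation isn’t described here), the pipeline chains focus scoring, cell detection, acquisition, and tracking without human intervention:

```python
# Hypothetical skeleton of an automated single-molecule imaging loop; each
# stub stands in for a neural-network or microscope-control component.

def nn_focus_score(frame):            # stand-in for the focus-scoring network
    return 0.9

def nn_detect_cells(frame):           # stand-in for the cell-detection network
    return [(120, 85), (310, 240)]    # hypothetical cell coordinates

def acquire_frame(stage_position, z_offset=0.0):
    return "frame"                    # placeholder for a camera acquisition

def track_single_molecules(frames):
    return [{"diffusion_coeff": 0.05, "dwell_time": 1.2}]  # dummy tracks

def automated_run(stage_positions, n_frames=100):
    results = []
    for pos in stage_positions:
        frame = acquire_frame(pos)
        while nn_focus_score(frame) < 0.8:      # refocus until sharp enough
            frame = acquire_frame(pos, z_offset=0.1)
        for cell in nn_detect_cells(frame):     # image each detected cell
            movie = [acquire_frame(pos) for _ in range(n_frames)]
            results.extend(track_single_molecules(movie))
    return results   # tracks from which pharmacological parameters are fit

print(automated_run(stage_positions=[(0, 0), (0, 500)]))
```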

Leading Transformation in a World of Uncertainty

Whether creating a disruptive business model, developing a radical innovation, or executing a cultural makeover, leaders know that their job is to drive organizational transformation to keep pace with today’s rapidly changing world.

They also know that the odds of success are against them.

Often, however, it is not the inability to solve technological or strategic problems that causes companies to fail. It is the human problems associated with change, such as fear, habits, politics, and lack of imagination, that frequently block efforts to innovate.

But if we could understand the psychological and cognitive biases that stymie innovation, couldn’t we do a better job at overcoming these barriers?

In our book, Leading Transformation: How to Take Charge of Your Company’s Future, we outline a new process rooted in an emerging field called behavioral transformation, which focuses on understanding how innovation and transformation actually happen in organizations.

The book is divided into three steps to help overcome the behavioral limitations that stymie innovation most: envisioning the future, breaking down resistance, and navigating unknown territory. The first step, envisioning the future, is necessary for all transformational efforts, and it is also one of the hardest. But with the right tools, it doesn’t have to be.

Innovation’s Antibody: Incremental Thinking

One of the biggest limitations to organizational innovation and transformation is the human tendency towards narrow thinking, or seeing only incremental improvements to the status quo.

In contrast, we admire innovators like Elon Musk and Jeff Bezos precisely because they dream bigger, dare bigger, and inspire those around them to change the world.

When we interviewed Musk and his team at Tesla, what surprised us most was how Musk’s vision to change the world to a renewable, electric vehicle future had infected everyone at the company. Engineers, assembly-line workers, and even custodians felt that they were changing the world through their work, not just making cars. One leader told us confidentially, “We don’t have the best engineers in the world, but they believe in what we are doing so much, we can do amazing things with them.”

The question then becomes, how can other leaders achieve a similar type of impact and help their organizations break free of incremental thinking? We suggest using strategic narratives and science fiction as tools for radically re-envisioning what is possible in your organization.

Using Science Fiction and Strategic Narrative to Envision the Future

For centuries, stories have opened our eyes to what is possible, suspended our disbelief, and stirred our hearts into action. Stories are one of humankind’s oldest and most powerful tools.

The power of story has its roots in evolutionary psychology: recent neuroscience research reveals that stories release a rush of neurochemicals that can literally sync people’s brains with one another and motivate action. When used properly, a story can help people see the future and transform them from adversaries into advocates working hard to create that future together.

As transformational tools, science fiction and strategic narratives inspire us for several reasons.

First, they encourage us to imagine—even demand that we imagine—a different but possible future. Second, good science fiction takes into account the human elements of technology and change and wrestles with their implications. Thus, the good stories are less about the technology and more about the human problems that technology reveals or solves.

The resulting story in a strategic narrative involves a protagonist, a dilemma, and a resolution, all built into a narrative arc that gives us reason to believe. What matters most is finding a way of storytelling that overcomes your audience’s natural resistance to change and to thinking bigger about the future. In the book, we outline how to create a strategic narrative and then how to choose the right medium to tell your story, for example, by creating a comic book.

Together these elements—the abilities both to see further and to ask what problems can be solved—can help organizations and leaders break free of the biases that trap them in incremental thinking and instead open up to envisioning valuable new futures.

To create the future we desire in today’s complex and uncertain business climate, we need new tools and approaches to help leaders overcome the incrementalism, biases, and fears that hold back positive change and transformation.

Ultimately, transformation is about learning to envision the future you want to create, and then finding the right tools to help you take charge of the future for your organization.

Image Credit: gst / Shutterstock.com

Army researchers develop novel technique to locate robots and soldiers in GPS-challenged environments

Scientists at the U.S. Army Research Laboratory have developed a novel algorithm that enables localization of humans and robots in areas where GPS is unavailable. According to ARL researchers Gunjan Verma and Dr. Fikadu Dagefu, the Army needs to be able to localize agents operating in physically complex, unknown and infrastructure-poor environments. “This capability is…

Weekend Watch: Wood Art and Caricatures from Cammie’s Garage

Artist and maker Cameron Porter explores and documents his passion for making from his garage.

This Robotic Warehouse Fills Orders in Five Minutes, and Fits in City Centers

Shopping is becoming less and less of a consumer experience—or, for many, less of a chore—as the list of things that can be bought online and delivered to our homes grows to include, well, almost anything you can think of. An Israeli startup is working to make shopping and deliveries even faster and cheaper—and they’re succeeding.

Last week, CommonSense Robotics announced the launch of its first autonomous micro-fulfillment center in Tel Aviv. The company claims the facility is the smallest of its type in the world at 6,000 square feet. For comparison’s sake, most fulfillment hubs that incorporate robotics are at least 120,000 square feet. Amazon’s upcoming facility in Bessemer, Alabama, will be a massive 855,000 square feet.

The thing about a building whose square footage is in the hundred-thousands is, you can fit a lot of stuff inside it, but there aren’t many places you can fit the building itself, especially not in major urban areas. So most fulfillment centers are outside cities, which means more time and more money to get your Moroccan oil shampoo, or your vegetable garden starter kit, or your 100-pack of organic protein bars from that fulfillment center to your front door.

CommonSense Robotics built the Tel Aviv center in an area that was previously thought too small for warehouse infrastructure. “In order to fit our site into small, tight urban spaces, we’ve designed every single element of it to optimize for space efficiency,” said Avital Sterngold, VP of operations. Using a robotic sorting system that includes hundreds of robots, plus AI software that assigns them specific tasks, the facility can prepare orders in less than five minutes end-to-end.

It’s not all automated, though—there’s still some human labor in the mix. The robots fetch goods and bring them to a team of people, who then pack the individual orders.

CommonSense raised $20 million this year in a funding round led by Palo Alto-based Playground Global. The company hopes to expand its operations to the US and UK in 2019. Its business model is to charge retailers a fee for each order fulfilled, while maintaining ownership and operation of the fulfillment centers. The first retailers to jump on the bandwagon were Super-Pharm, a drugstore chain, and Rami Levy, a retail supermarket chain.

“Staying competitive in today’s market is anchored by delivering orders quickly and determining how to fulfill and deliver orders efficiently, which are always the most complex aspects of any ecommerce operation. With robotics, we will be able to fulfill and deliver orders in under one hour, all while saving costs on said fulfillment and delivery,” said Super-Pharm VP Yossi Cohen. “Before CommonSense Robotics, we offered our customers next-day home delivery. With this partnership, we are now able to offer our customers same-day delivery and will very soon be offering them one-hour delivery.”

Long live the instant gratification economy—and the increasingly sophisticated technology that’s enabling it.

Image Credit: SasinTipchai / Shutterstock.com

This Week’s Awesome Stories From Around the Web (Through October 13)

ROBOTICS

Boston Dynamics’ Atlas Robot Shows Off Parkour Skills
Erico Guizzo | IEEE Spectrum
“The remarkable evolution of Atlas, Boston Dynamics’ most agile robot, continues. In a video posted today, Atlas is seen jumping over a log and leaping up steps like a parkour runner. The robot has come a long way.”

ARTIFICIAL INTELLIGENCE

Google’s AI Bots Invent Ridiculous New Legs to Scamper Through Obstacle Course
George Dvorsky | Gizmodo
“Sure, many of the solutions conceived by these virtual bots are weird and even absurd, but that’s kind of the point. As the abilities of these self-learning systems increase in power and scope, they’ll come up with things humans never would have thought of. Which is actually kind of scary.”

COMPUTING

IBM Pushes Beyond 7 Nanometers, Uses Graphene to Place Nanomaterials on Wafers
Dexter Johnson | IEEE Spectrum
“Four years ago, IBM announced that it was investing US $3 billion over the next five years into the future of nanoelectronics with a broad project it dubbed ‘7nm and Beyond.’ With at least one major chipmaker, GlobalFoundries, hitting the wall at the 7-nm node, IBM is forging ahead, using graphene to deposit nanomaterials in predefined locations without chemical contamination.”

BRAIN-MACHINE INTERFACES

The Pentagon’s Push to Program Soldiers’ Brains
Michael Joseph Gross | The Atlantic
“DARPA has dreamed for decades of merging human beings and machines. …Within decades, neurotechnology could cause social disruption on a scale that would make smartphones and the internet look like gentle ripples on the pond of history. Most unsettling, neurotechnology confounds age-old answers to this question: What is a human being?”

TRANSPORTATION

Why You Have (Probably) Already Bought Your Last Car
Justin Rowlatt | BBC
“Yes, it’s a big claim and you are right to be skeptical, but the argument that a unique convergence of new technology is poised to revolutionize personal transportation is more persuasive than you might think.”

GENETICS

So Much Genetic Testing. So Few People to Explain It to You.
Megan Molteni | Wired
“Today, with precision medicine going mainstream and an explosion of apps piping genetic insights to your phone from just a few teaspoons of spit, millions of Americans are having their DNA decoded every year. That deluge of data means that genetic counselors—the specialized medical professionals trained to help patients interpret genetic test results—are in higher demand than ever.”

Image Credit: Forance / Shutterstock.com