The Democratization of AI Is Putting Powerful Tools in the Hands of Non-Experts

The shortage of qualified data scientists is often highlighted as one of the major handbrakes on the adoption of big data and AI. But a growing number of tools are putting these capabilities in the hands of non-experts, for better and for worse.

There’s been an explosion in the breadth and quality of self-service analytics platforms in recent years, which let non-technical employees tap the huge amounts of data businesses are sitting on. They typically let users carry out simple, day-to-day analytic tasks—like creating reports or building data visualizations—rather than having to rely on the company’s data specialists.

Gartner recently predicted that workers using self-service analytics will output more analysis than professional data scientists. Given the perennial shortage of data specialists and the huge salaries they command these days, that’s probably music to the ears of most C-suite executives.

And increasingly, it’s not just simple analytic tasks that are being made more accessible. Driven in particular by large cloud computing providers like Amazon, Google, and Microsoft, there are a growing number of tools to help beginners start to build their own machine learning models.

These tools provide pre-built algorithms and intuitive interfaces that make it easy for someone with little experience to get started. They are aimed at developers rather than the everyday business users who use simpler self-service analytics platforms, but they mean it’s no longer necessary to have a PhD in advanced statistics to get started.

Most recently, Google released a service called Cloud AutoML that actually uses machine learning itself to automate the complex process of building and tweaking a deep neural network for image recognition.

They aren’t the only ones automating machine learning. Boston-based DataRobot lets users upload their data and highlight their target variables; the system then automatically builds hundreds of models drawn from the platform’s library of hundreds of open-source machine learning algorithms. The user can then pick the best-performing model and use it to analyze future data.

For the more adventurous developers, there are a growing number of open-source machine learning libraries that provide the basic sub-components needed to craft custom algorithms.

This still requires considerable coding experience and a brain wired for data, but just last month Austin-based CognitiveScale released Cortex, which they say is the first graphical user interface for building AI models.

Rather than having to specify what they want by writing and combining endless lines of code, users can simply drop various pre-made AI “skills” like sentiment analysis or natural language processing into a honeycomb-like interface with lines between the cells denoting data flows. These skills can be combined to build a more complex model that is able to carry out high-level tasks, like processing insurance claims using text analysis.
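To make the dataflow idea concrete, here is a minimal sketch of composing pre-made “skills” into a claims-processing pipeline. The skill names and logic below are invented purely for illustration and bear no relation to Cortex’s actual components or API:

```python
# Hypothetical "skills" wired together in sequence, loosely
# mirroring the drag-and-drop dataflow idea (not Cortex's API).
def extract_claim_id(text):
    # Skill 1: pull the first numeric token out of the claim text.
    return next(word for word in text.split() if word.isdigit())

def sentiment(text):
    # Skill 2: a toy sentiment check on the claim wording.
    return "negative" if "denied" in text.lower() else "positive"

def process_claim(text):
    # Composite model: route the same text through both skills
    # and merge their outputs, as the connecting lines would.
    return {"claim_id": extract_claim_id(text),
            "sentiment": sentiment(text)}

print(process_claim("Claim 4521 was denied after review"))
# {'claim_id': '4521', 'sentiment': 'negative'}
```

The point of the graphical approach is that a user wires the equivalent of `process_claim` together visually, without writing the glue code at all.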

Just as visual GUIs like Windows replaced esoteric command-line interfaces and greatly expanded the number of people able to engage with personal computers, the creators of Cortex say their tool could have a similar effect for AI.

All of these attempts to democratize access to advanced analytics could go a long way to speeding up its adoption across all kinds of businesses. Putting these tools in the hands of non-experts could mean companies that don’t have the resources to compete for the top data professionals can still reap the benefits of AI.

It also frees up experts to work on the most cutting-edge applications of the technology rather than getting bogged down in more mundane but commercially important projects.

But there are also risks that need to be considered before setting non-experts loose on an organization’s data sets. Data science isn’t just about knowing how to build an algorithm. It’s about understanding how to collect data effectively, how to prepare it for analysis, and the strengths and limitations of various statistical techniques.

The old adage “garbage in, garbage out” highlights the danger of putting powerful analytics in the hands of those who don’t fully understand the tools they are using, the provenance of their data, or the errors and biases that may be hidden in it.

Writing in Forbes, Brent Dykes from self-service analytics platform Domo points out that businesses should not expect the democratization of these technologies to magically turn their employees into effective “citizen data scientists.” He says they need to be coupled with solid training on how to interpret and analyze data properly, as well as robust data governance to make sure the data being used is reliable.

That will require trained data scientists to play a critical oversight role to ensure that the proliferation of AI provides businesses with reliable insights rather than leading them astray.

Image Credit: fatmawati achmad zaenuri /

Maker Pro News: Startup Outsiders, French Maker Pros, New Hardware from Particle and More

This week, hear from startup outsiders and French maker pros, and learn more about new hardware from Particle.

Read more on MAKE

The post Maker Pro News: Startup Outsiders, French Maker Pros, New Hardware from Particle and More appeared first on Make: DIY Projects and Ideas for Makers.

What Roboticists Are Learning From Early Generations of Lifelike Humanoid Robots

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests that there are millions of possible configurations; researchers identify 62 that they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds that behind the robots’ more simplistic chat engine, although it’s the latter that inflates punters’ expectations with a few pre-programmed questions in an interview.
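As a rough back-of-the-envelope check on that combinatorial claim (assuming, purely for illustration, that each motor has only two distinguishable positions):

```python
# Back-of-the-envelope: ~30 independent face motors, each with
# a hypothetical two distinguishable positions, already yield
# over a billion raw combinations -- dwarfing the 62 expressions
# judged "human-like."
motors = 30
positions_per_motor = 2  # illustrative assumption
raw_combinations = positions_per_motor ** motors
print(raw_combinations)  # 1073741824
```

With more than two positions per motor, the space grows even faster, which is why the researchers’ curated set of 62 expressions is such a tiny slice of what the hardware can physically do.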

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of unease and revulsion people feel when something appears almost, but not quite, human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human, and this holds even for machines designed to interact with us—consider the popularity of Aibo, Jibo, and other robots that don’t try to resemble humans. Something that resembles a human, but isn’t quite right, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great works of art have often closely resembled humans, but that the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than to works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge of all. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov /

This Week’s Awesome Stories From Around the Web (Through February 17)


In the Future We Won’t Edit Genomes—We’ll Just Print Out New Ones
Bryan Walsh | MIT Technology Review
“’Over the next 10 years synthetic biology is going to be producing all kinds of compounds and materials with microorganisms,’ says Boeke. ‘We hope that our yeast is going to play a big role in that.’…One day, though, we may routinely design genomes on computer screens. Instead of engineering or even editing the DNA of an organism, it could become easier to just print out a fresh copy. Imagine designer algae that make fuel; disease-proof organs; even extinct species resurrected.”


Waiting for the Robot Rembrandt
Hideki Nakazawa | Nautilus
“When art is made to satisfy the needs of a third party—in this case, the computer programmer employed by the artist—it is illustration or commercial art, not fine art. If fine art is ever to be made by AI, it must be its own: produced by machines autonomously, independently, and actively for the machine’s own sake and with the machine’s own aesthetics. Only in that case would the art not be a passive product of human creation.”


How the Private Space Industry Could Take Over Lower Earth Orbit—and Make Money Off It
Loren Grush | The Verge
“Lower Earth orbit is a great testing ground for the technologies needed for missions to the Moon and Mars, which NASA has its eye on. With private stations, NASA could buy time and space on these modules to continue doing tests in microgravity. Private space stations could also be used to create entirely new types of revenue, serving as places to do in-space manufacturing of satellites or platforms for tourists to visit.”


Exposing the Power Vampires in Self-Driving Cars
Peter Fairley | IEEE Spectrum
“However, autonomy’s energy bill ate up only part of the overall energy reduction expected from the autonomous vehicles’ ability to drive smarter—such as platooning of vehicles through intersections and on highways to cut congestion in cities and aerodynamic drag on the highway. As a result the modeled Ford sedans still delivered a 6-9 percent net energy reduction over their life cycle with autonomy added, and promised a comparable reduction in greenhouse gas emissions.”


Should Congress Create a Crypto-Cop?
Peter J. Henning | The New York Times
“Any theft can be prosecuted under a range of federal laws, including the Computer Fraud and Abuse Act and the wire fraud statute because cryptocurrencies are a form of property, even though they are intangible. But criminal prosecution is not a particularly effective weapon for battling hackers who plunder cryptocurrency wallets. Many of the exchanges are outside the United States, so finding those responsible is a challenge. Regulating the exchanges would be a significant step toward ensuring there is at least some protection for those buying and selling cryptocurrencies.”

Image Credit: MJgraphics /

Tips of the Week: Choosing a Sewing Needle, Cheap Craft Shelving, Working with Epoxies, and Generating Gears

Sewing, equipment cleaning, storage, epoxy basics, gear-making, and more. A little something for everyone this week.

Read more on MAKE


Particle Jumps Into Mesh Networking With Three New Boards (And Lower-Cost LTE)

Meet the new Argon, Boron, and Xenon boards.

Read more on MAKE


Video Friday: Boston Dynamics, Autonomous Drone, and Robot Drum Man

Your weekly selection of awesome robot videos.

Influenza: The Search for a Universal Vaccine

The current 2017-18 flu season is a bad one. Hospitalization rates are now higher than in recent years at the same point, and infection rates are still rising. The best line of defense is the seasonal influenza vaccine. But H3N2 viruses, like the one that’s infecting many people this year, are particularly hard to defend against, and this year’s shot isn’t very protective against H3N2.

Producing an effective annual flu shot relies on accurately predicting which flu strains are most likely to infect the population in any given season. It requires the coordination of multiple health centers around the globe as the virus travels from region to region. Once epidemiologists settle on target flu strains, vaccine production shifts into high gear; it takes at least six months to generate the more than 140 million doses necessary for the American population.

Chart: The Conversation, CC-BY-ND. Source: Centers for Disease Control and Prevention.

Incorrect or incomplete epidemiological forecasting can have major consequences. In 2009, while manufacturers were preparing vaccines against the forecasted strains, an unanticipated H1N1 influenza virus emerged. The prepared seasonal vaccine didn’t protect against this unanticipated virus, causing worldwide panic and over 18,000 confirmed deaths. This was likely only a fraction of the true number of deaths, estimated to exceed 150,000. Better late than never, a vaccine was eventually produced against the emergent H1N1, requiring a second flu shot that year.

Given that influenza has caused the majority of pandemics over the past 100 years—including the 1918 flu that resulted in as many as 50 million deaths—we’re left with the question: Can scientists produce a “universal” vaccine? An ideal version would be capable of protecting against diverse strains of influenza and wouldn’t require a yearly shot.

Vaccines Prime the Immune System to Fight

By the 18th century, and arguably much earlier in history, it was commonly known that a survivor of smallpox would not come down with it again upon subsequent exposure. Somehow, infection conferred immunity against the disease. In fact, people recognized that milkmaids who came into contact with cowpox-ridden cattle would similarly be protected from smallpox.

In the late 1700s, farmer Benjamin Jesty inoculated his family with cowpox, effectively immunizing them against smallpox. Physician Edward Jenner then catapulted humanity into a new age of immunology when he lent scientific credence to the procedure.

So if one inoculation of cowpox, or one exposure to (and survival of) smallpox, confers decades of or even lifelong immunity, why are individuals encouraged to receive the flu vaccine every year?

The answer lies in how quickly the influenza virus’s anatomy changes. Each virus consists of a roughly spherical membrane encapsulating constantly mutating genetic material. This membrane is peppered with two types of “spikes”: hemagglutinin, or HA, and neuraminidase, or NA, each made up of a stem and a head. HA and NA help the virus with infection by binding to host cells. They mediate the entry of the virus into the cell and, once it replicates, the eventual exit.

Once a doctor injects a vaccine, an individual’s immune system gets to work by making antibodies that recognize, for example, the hemagglutinin it contains. The next time that hemagglutinin shows up—such as in the form of the virus strains the vaccine mimicked—the body’s immune cells recognize them and fight them off, preventing infection.

For vaccine developers, one frustrating characteristic about influenza’s mutating genome is how rapidly HA and NA change. These constant alterations are what send them back to the drawing board for new vaccines every flu season.

Different Methods to Design a Vaccine

The smallpox vaccine was one of the earliest to use the “empirical paradigm” of vaccinology—the same strategy we largely use today. It relies on a trial-and-error approach to mimic the immunity induced by natural infection.

In other words, vaccine developers believe the body will react to something in the inoculation. But they don’t focus on which specific patch of the virus is causing that immune response. It doesn’t really matter if it’s a reaction to a small patch of HA that many strains share, for instance. When using an entire virus as starting material, it’s possible to get many different antibodies recognizing many different parts of the virus used in the vaccine.

The seasonal flu shot generally fits into this empirical approach. Each year, epidemiologists forecast which flu strains are most likely to infect populations, typically settling on three or four. Researchers then attenuate or inactivate these strains so they can act as the mimics in that year’s influenza vaccine without giving recipients the flu. The hope is that an individual’s immune system will respond to the vaccine by creating antibodies that target these strains; then when he or she comes into contact with the flu, the antibodies will be waiting to neutralize those strains.

But there’s a different way to design a vaccine. It’s called rational design and represents a potentially game-changing paradigm shift in vaccinology.

The goal is to design some molecule—or immunogen—that can trigger the production of effective antibodies without requiring exposure to the virus. Relative to current vaccines, the engineered immunogen may even allow for more specific responses, meaning the immune response targets particular regions of the virus. There’s the possibility of greater breadth, too, meaning it could target multiple strains or even related viruses.

This strategy works to target specific epitopes, or patches of the virus. Since antibodies work by recognizing structures, the designers want to emphasize to the immune system the structural properties of the immunogens they’ve created. Then researchers can try to design candidate vaccines with those structures in hopes they’ll provoke the immune system to produce relevant antibodies. This path might let them assemble a vaccine that elicits a more effective and efficient immune response than would be possible with the traditional trial-and-error method.

Promising headway has been made in vaccine design for respiratory syncytial virus using this new rational paradigm, but efforts are still underway to use this general approach for influenza.

Toward a Universal Flu Vaccine

In recent years, researchers have isolated a number of potent, influenza-neutralizing antibodies produced in our bodies. While the antibody response to influenza is primarily directed at the head of the HA spike, several antibodies have been found that target HA’s stem. Since the stem is more constant across viral strains than the head, it could be flu’s Achilles’ heel, and antibodies that bind to this region may be a good starting point for vaccine design.

Researchers are pursuing a number of approaches that could cause the body to produce these antibodies of interest before becoming infected. In one strategy, scientists attached lab-made copies of hemagglutinin stems to a spherical protein nanoparticle. The resultant structure isn’t a virus and doesn’t even contain any viral genetic material. But it looks a lot like a virus to the body’s immune system, and so elicits a good antibody response. And, because only the stem is attached to the nanoparticle, the immune system can focus the antibody response on these regions, which are more similar from strain to strain than the head. This general approach has seen success in both mice and ferrets, but further testing is required before it can be tried in people.

With current technology, there may never be a “one and done” flu shot. And epidemiological surveillance will always be necessary. However, it is not inconceivable that we can move from a once-per-year model to a once-every-10-years approach, and the field has been making huge strides to achieve this.

This is an updated version of an article originally published on Jan. 11, 2017.

Ian Setliff, Ph.D. Candidate in Chemical & Physical Biology, Vanderbilt Vaccine Center, Vanderbilt University and Amyn Murji, Ph.D. Student in Microbiology and Immunology, Vanderbilt Vaccine Center, Vanderbilt University

This article was originally published on The Conversation. Read the original article.

Image Credit: CNK02 /

3D Printing T’Challa’s Spear from Black Panther

After watching Black Panther, 3D print T’Challa’s spear from the movie. It’ll make for an excellent prop for a Halloween costume or cosplay.

Read more on MAKE


Video: Using Resin to Make Faux Stained Glass

If you’re uncomfortable working with stained glass, colored resin offers a safer alternative that achieves similar effects.

Read more on MAKE
