
Robot/lab – Europe's first robotics incubator (including connected objects) – © Up'Magazine + news

Everyone knows my love for the robolution! Here is a piece of news worth following: the birth of Robot/lab. Not to mention the State's commitment, with the upcoming launch of the Cité des objets connectés; see:

Industrie 5.0: Axelle Lemaire announces that the Cité des objets connectés will be launched during CES, in January 2015 (L'Usine Digitale): http://www.proame.net/industrie-5-0-axelle-lemaire-annonce-que-la-cite-des-objets-connectes-sera-lancee-pendant-le-ces-en-janvier-2015-lusine-digitale/

And CES Innovation 2015 in Las Vegas will hold still more surprises for us. On this topic, read this excellent paper on the big trends and the strong French showing: http://www.fabernovel.com/2014/12/02/why-ces-2015-innovation-awards-need-to-be-on-your-radar/

In short: think internet of things and robots… Maryline

Article all rights reserved Up'Mag

Europe's first robotics incubator is opening in the first arrondissement of Paris, gathering in a single place all the energy needed to bring to life and grow the robots and connected objects of tomorrow.

The Robot-Lab brings together the skills needed to create an ecosystem that helps selected projects emerge. It offers incubated companies material, human, financial and technological support: equipped premises, help with administrative and financial procedures (public grants, fundraising…) and with product launches (communication, manufacturing and distribution). Its goal is to help the French maker community go from dream, to realization, to the success of their project. By offering the best possible ecosystem, Robot Lab aims to make it easier to create companies and jobs while showcasing French innovation in robotics.
A true hub for the robotics ecosystem, it also attracts companies and investors: companies can discover the latest technological advances there, and can either back an incubated project or incubate their own. Investors come to the Robot-Lab to discover the latest initiatives and to support, personally and financially, the gem of their choice.

This first robotics and connected-object incubator is setting up first in the heart of Paris, in an environment favorable to creation, at 10 rue de la Coquillère in the 1st arrondissement. It will then expand with branches in Lyon and Toulouse.

How does the Robot-Lab work?

It is a facility comprising a mechanical workshop, an event space and more conventional offices. It offers selected, incubated projects technological resources, support in obtaining public grants, a network of business angels and mentors, and help with management, communication, manufacturing and distribution.
This ecosystem should forge strong ties between startup founders, makers, companies and investors. Founders receive all these services in exchange for a share of their equity.

A true springboard for French robotics

Launched in a test phase on September 1, 2014 in Paris, the Robot Lab already supports several startups and has signed strategic partnerships.
CRIIF, a well-established organization that has been developing robotics projects since 1987, has committed to providing technical support to every incubated project.
"Being at the heart of the latest innovations is a dream for our team and our clients!" says Rodolphe Hasselvander, its founder, who has decided to get fully involved in the project by physically installing his team within the Robot-Lab.

RobotShop.com, the world's leading robot distributor, is joining the effort as the Robot-Lab's distribution partner, giving all incubated startups an immediate way to bring their offering to market. Mario Tremblay, CEO of RobotShop, says: "We are committed to promoting and distributing the mature, selected offerings supported by the Robot Lab; we thereby provide a formidable growth channel for the makers who will have had the chance to develop their project in a unique ecosystem."

Robot Capital, a private investment fund dedicated to robotics, has committed to backing the best projects as lead investor. Numerous partnerships with funds, entrepreneurs, schools and companies are currently being put together.

Behind the initiative is Alexandre Ichaï, a software developer who became a serial entrepreneur and then an investor, who wants founders to make the most of what already exists so they can focus on innovation:
"From a human standpoint, the proven experience of entrepreneurs is decisive for a project's success. From a technical standpoint, many companies develop their technology from scratch even though other, non-competing companies have already built it! From a financial standpoint, finding the right strategic partners also means finding the funds. In short, through sharing, the synergies are endless."

www.robot-lab.org

 

‘Robo Brain’ will teach robots everything from the Internet

Robo Brain is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals, all being translated and stored in a robot-friendly format.

The reason: to serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave.

Robotics researchers like Ashutosh Saxena, assistant professor of computer science at Cornell University, and his associates at Cornell's Personal Robotics Lab have been teaching robots these things one at a time (which KurzweilAI has covered over the last two years in four articles).

For example, how to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

Now it’s all being automated, cloudified, and crowdsourced.

“If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” said Saxena.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, say Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.
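As a rough illustration of what such object knowledge could look like in code, here is a minimal sketch of an affordance store a robot might query; the class, field names and constraints are assumptions for illustration, not Robo Brain's actual representation.

```python
# Illustrative sketch only: a toy affordance store, not Robo Brain's real schema.
from dataclasses import dataclass, field

@dataclass
class ObjectKnowledge:
    name: str
    is_a: list                                       # parent classes, e.g. ["container"]
    affordances: dict = field(default_factory=dict)  # action -> constraints

mug = ObjectKnowledge(
    name="coffee_mug",
    is_a=["container", "dishware"],
    affordances={
        "pour_into": {"requires": "upright"},
        "grasp": {"grasp_point": "handle"},
        "carry": {"keep_upright_if": "full"},  # relaxed when empty (dishwasher to cupboard)
    },
)

def how_to(obj: ObjectKnowledge, action: str) -> dict:
    """Return the stored constraints for an action, if the object affords it."""
    return obj.affordances.get(action, {})

print(how_to(mug, "carry"))  # {'keep_upright_if': 'full'}
```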

RSS2014: 07/16 15:00-16:00 Early Career Spotlight: Ashutosh Saxena (Cornell): Robot Learning

Structured deep learning

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12–16 in Berkeley, and has launched a website for the project at http://robobrain.me

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture.

Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges).

The nodes could represent objects, actions or parts of an image, and each one is assigned a probability — how much you can vary it and still be correct.

In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those limits. “The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries,” said Aditya Jami, a visiting researcher at Cornell, who designed the large-scale database for the brain. Perhaps something that looks like a chart of relationships between Facebook friends, but more on the scale of the Milky Way Galaxy.
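To make the chain-matching idea concrete, here is a minimal, hedged sketch of checking a query chain against such a graph, with per-node tolerances standing in for the probabilities described above; the node names, numbers and matching rule are illustrative assumptions, not the real Robo Brain database.

```python
# Toy node/edge store with per-node tolerances; purely illustrative.
NODES = {
    # node -> (expected feature value, tolerance: how far it can vary and still match)
    "mug":          (0.8, 0.15),
    "grasp_handle": (0.6, 0.20),
    "pour":         (0.4, 0.10),
}
EDGES = {("mug", "grasp_handle"), ("grasp_handle", "pour")}

def chain_matches(observed):
    """observed is a chain [(node, measured_value), ...]; it matches when every node
    exists, consecutive nodes are connected, and each measured value falls within
    that node's tolerance."""
    for (a, _), (b, _) in zip(observed, observed[1:]):
        if (a, b) not in EDGES and (b, a) not in EDGES:
            return False
    return all(
        n in NODES and abs(v - NODES[n][0]) <= NODES[n][1]
        for n, v in observed
    )

print(chain_matches([("mug", 0.75), ("grasp_handle", 0.55), ("pour", 0.45)]))  # True
```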

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.

The project is supported by the National Science Foundation, The Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States competitive in the world economy.

AI – robots: Crowdsourcing for robots

The UW’s robot builds a turtle model (credit: University of Washington)

Crowdsourcing can be a quick and effective way to teach a robot how to complete tasks, University of Washington computer scientists have shown.

Learning by imitating a human is a proven approach to teach a robot to perform tasks, but it can take a lot of time. But if the robot could learn a task’s basic steps, then ask the online community for additional input, it could collect more data on how to complete this task efficiently and correctly.

So the team designed a study that taps into the online crowdsourcing community to teach a robot a model-building task. To begin, study participants built a simple model — a car, tree, turtle and snake, among others — out of colored Lego blocks. Then, they asked the robot to build a similar object.

But based on the few examples provided by the participants, the robot was unable to build complete models. So to gather more input about building the objects, the robot turned to the crowd.

They hired people on Amazon Mechanical Turk, a crowdsourcing site, to build similar models of a car, tree, turtle, snake and others. From more than 100 crowd-generated models of each shape, the robot searched for the best models to build based on difficulty to construct, similarity to the original and the online community’s ratings of the models.
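One plausible way to combine those three criteria is a simple weighted score, sketched below; the weights, fields and numbers are illustrative assumptions, not the UW team's published method.

```python
# Hedged sketch: score crowd-submitted models on the three criteria mentioned above
# (build difficulty, similarity to the original, crowd rating).
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    difficulty: float   # 0 (easy to build) .. 1 (hard)
    similarity: float   # 0 .. 1, overlap with the participant's original design
    rating: float       # 0 .. 1, normalized community rating

def score(m: Model, w_diff=0.3, w_sim=0.5, w_rate=0.2) -> float:
    # Higher is better: reward similarity and rating, penalize difficulty.
    return w_sim * m.similarity + w_rate * m.rating - w_diff * m.difficulty

crowd_models = [
    Model("turtle_a", difficulty=0.7, similarity=0.9, rating=0.8),
    Model("turtle_b", difficulty=0.2, similarity=0.7, rating=0.6),
]
best = max(crowd_models, key=score)
print(best.name)  # "turtle_b": slightly less similar, but much easier to build
```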

The robot then built the best models of each participant’s shape.

“We’re trying to create a method for a robot to seek help from the whole world when it’s puzzled by something,” said Rajesh Rao, an associate professor of computer science and engineering and director of the Center for Sensorimotor Neural Engineering at the UW. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”

Example imitation learning scenario. Intermediate results from following the example imitation learning scenario are visualized. The results can be seen in order by following the solid arrows. The dotted arrow shows the scenario of directly imitating the user's original demonstration, against which the method was compared. (credit: Michael Jae-Yoon Chung et al.)

Goal-based imitation

This type of learning is called “goal-based imitation,” and it leverages the growing ability of robots to infer what their human operators want, relying on the robot to come up with the best possible way of achieving the goal when considering factors such as time and difficulty.

For example, a robot might “watch” a human building a turtle model, infer the important qualities to carry over, then build a model that resembles the original, but is perhaps simpler so it’s easier for the robot to construct.
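A minimal sketch of this goal-based loop might look like the following, assuming toy goal features (block colors and height) and a "fewest blocks" notion of simplicity; both are illustrative assumptions, not the study's actual representation.

```python
# Hedged sketch of goal-based imitation: infer a coarse "goal" from a demonstration,
# then pick the simplest candidate that still satisfies it.
from collections import Counter

def infer_goal(demo_blocks):
    """Reduce a demonstrated model to coarse qualities worth preserving."""
    return {"colors": Counter(b["color"] for b in demo_blocks),
            "max_height": max(b["z"] for b in demo_blocks)}

def satisfies(candidate_blocks, goal):
    colors = Counter(b["color"] for b in candidate_blocks)
    return (all(colors[c] >= 1 for c in goal["colors"]) and
            max(b["z"] for b in candidate_blocks) <= goal["max_height"])

def pick_buildable(candidates, goal):
    """Among candidates meeting the goal, prefer the one with the fewest blocks."""
    ok = [c for c in candidates if satisfies(c, goal)]
    return min(ok, key=len) if ok else None

demo    = [{"color": "green", "z": 0}, {"color": "green", "z": 1}, {"color": "brown", "z": 0}]
simpler = [{"color": "green", "z": 0}, {"color": "brown", "z": 0}]
goal = infer_goal(demo)
print(pick_buildable([demo, simpler], goal))  # picks the simpler two-block model
```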

Study participants generally preferred crowdsourced versions that looked the most like their original designs. In general, the robot’s final models were simpler than the starting designs — and it was able to successfully build these models, which wasn’t always the case when starting with the study participants’ initial designs.

The team applied the same idea to learning manipulation actions on a two-armed robot. This time, users physically demonstrated new actions to the robot.

Then, the robot imagined new scenarios in which it did not know how to perform those actions. Using abstract, interactive visualizations of the action, it asked the crowd to provide new ways of performing actions in those new scenarios.

More complex tasks

The UW team is now looking at using crowdsourcing and community-sourcing to teach robots more complex tasks such as finding and fetching items in a multi-floor building. The researchers envision a future in which our personal robots will engage increasingly with humans online, learning new skills and tasks to better assist us in everyday life.

“Service robots in the home or in workplaces will be faced with tremendous variability in the situations they need to operate in,” said Maya Cakmak, Assistant Professor, Computer Science & Engineering, University of Washington, in an email to KurzweilAI. “Crowdsourcing can be a scalable solution for customizing these robots to properly function in each particular environment, based on the preferences of each particular user.

“Our work also proposes a new framework in which the crowdsourced task is seeded by local users, rather than being directly requested by researchers. This can lead to new theoretical work on the incentive models behind this collaboration between end-users and crowd workers.

“Our work is complementary to other work on using crowdsourcing to train robots, in that different research groups have looked into crowdsourcing different learning problems. We have looked into high-level task descriptions (ICRA paper) and two-armed manipulation actions (HCOMP paper); while other groups explored learning executions of natural language commands (Cornell) or learning object grasps (WPI).”

So when can we expect this innovation to be available commercially? “I would argue that robotics companies will initially provide remote robot training services themselves,” said Cakmak. “This is just around the corner, perhaps within two to three years. Crowdsourcing of such services might take more time and I think the main roadblocks are quality control and privacy of the end-users.”

Other research teams at Brown University, Worcester Polytechnic Institute and Cornell University are working on similar ideas for developing robots that have the ability to learn new capabilities through crowdsourcing.

The research team presented its results at the 2014 Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation in Hong Kong in early June. This work will be presented at the Conference on Human Computation and Crowdsourcing in November.

This research was funded by the U.S. Office of Naval Research and the National Science Foundation.

Video here: https://www.youtube.com/watch?v=x30Qw9Vry7k

Abstract of Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation presentation

Although imitation learning is a powerful technique for robot learning and knowledge acquisition from naïve human users, it often suffers from the need for expensive human demonstrations. In some cases the robot has an insufficient number of useful demonstrations, while in others its learning ability is limited by the number of users it directly interacts with. We propose an approach that overcomes these shortcomings by using crowdsourcing to collect a wider variety of examples from a large pool of human demonstrators online. We present a new goal-based imitation learning framework which utilizes crowdsourcing as a major source of human demonstration data. We demonstrate the effectiveness of our approach experimentally on a scenario where the robot learns to build 2D object models on a table from basic building blocks using knowledge gained from locals and online crowd workers. In addition, we show how the robot can use this knowledge to support human-robot collaboration tasks such as goal inference through object-part classification and missing-part prediction. We report results from a user study involving fourteen local demonstrators and hundreds of crowd workers on 16 different model building tasks.

Autonomous cars: How to make robots and self-driving cars think faster

Andrea Censi, a research scientist in MIT’s Laboratory for Information and Decision Systems, has developed a new type of camera sensor system that can take measurements a million times a second.

The new system combines a Dynamic Vision Sensor (DVS) (to rapidly detect changes in luminance) with a conventional CMOS camera sensor (to provide the absolute brightness values, or grayscale values).

An autonomous vehicle using a standard camera to monitor its surroundings might take about a fifth of a second to update its location — not fast enough to handle the unexpected. With an event-based sensor, the vehicle could update its location every thousandth of a second or so, allowing it to perform much more nimble maneuvers.

“In a regular camera, you have an array of sensors, and then there is a clock,” Censi explains. “If you have a 30-frames-per-second camera, every 33 milliseconds the clock freezes all the values, and then the values are read in order.”

With an event-based sensor, by contrast, “each pixel acts as an independent sensor,” Censi says. “When a change in luminance — in either the plus or minus direction — is larger than a threshold, the pixel says, ‘I see something interesting’ and communicates this information as an event. And then it waits until it sees another change.”
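The per-pixel behavior Censi describes can be sketched in a few lines; the threshold value and function below are illustrative, not a real DVS driver interface.

```python
# Minimal sketch of the event-based idea: each pixel fires an event whenever its
# log-luminance changes by more than a threshold, instead of waiting for a global
# frame clock. Purely illustrative.
import numpy as np

THRESHOLD = 0.15  # illustrative contrast threshold

def dvs_events(prev_log_lum, new_log_lum, t):
    """Return (x, y, polarity, t) events for pixels whose log-luminance changed
    by more than THRESHOLD since the last event at that pixel."""
    diff = new_log_lum - prev_log_lum
    ys, xs = np.nonzero(np.abs(diff) > THRESHOLD)
    events = [(x, y, 1 if diff[y, x] > 0 else -1, t) for y, x in zip(ys, xs)]
    # Reset the reference only at pixels that fired, as a real DVS pixel does.
    prev_log_lum[ys, xs] = new_log_lum[ys, xs]
    return events

# Feed successive log-luminance frames: between frames, only changed pixels produce
# output, which is why the latency can drop to microseconds in hardware.
```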

UPDATE 5/30/2014: Added description of the two different sensors used.


Abstract of Technical Report

The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline with a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide the grayscale value but only changes in the luminance; and because the output is composed of a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera to provide the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion.
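Purely as a structural sketch of the pipeline the abstract describes, the loop below anchors the estimate on each slow CMOS frame and then applies per-event relative updates until the next frame arrives; the data layout and update rule are placeholders, not the paper's estimator.

```python
# Hedged, highly simplified sketch of the DVS + CMOS fusion structure.
def fuse(cmos_frames, events_between_frames):
    """cmos_frames: absolute rotation readings (degrees), ~30 Hz.
    events_between_frames: one list per frame of per-event incremental rotation
    estimates (degrees) observed before the next frame."""
    trajectory = []
    for absolute, events in zip(cmos_frames, events_between_frames):
        estimate = absolute                    # re-anchor on the slow absolute measurement
        trajectory.append(estimate)
        for delta in events:                   # microsecond-scale relative updates
            estimate += delta
            trajectory.append(estimate)
    return trajectory

print(fuse([0.0, 1.0], [[0.2, 0.3, 0.4], [0.1]]))  # [0.0, 0.2, 0.5, 0.9, 1.0, 1.1]
```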

Roombots: Lego-like robots that transform into furniture!

Researchers at the Biorobotics Laboratory of the École polytechnique fédérale de Lausanne (EPFL) have designed small modular robotic elements that can be used to build reconfigurable furniture.

The project, called "Roombots", is intended in particular for people with reduced mobility. It has just been published in the journal Robotics and Autonomous Systems.

According to Auke Ijspeert, head of the Biorob lab: "Eventually, you will simply program the layout of a bedroom or a conference room, and then let the modules do the work."

Like Lego bricks, the Roombots pieces can interlock with one another to create different structures, as shown in this video: https://www.youtube.com/watch?v=0XDpT8hI89k

Source: EPFL: http://biorob.epfl.ch/

See also: http://www.kurzweilai.net/roombots-transform-into-movable-furniture-and-objects

Google again: Google's self-driving car gathers nearly 1 GB/sec

(Credit: Google)

“Google’s self-driving car gathers 750 megabytes of sensor data per SECOND! That is just mind-boggling to me. Here is a picture of what the car ‘sees’ while it is driving and about to make a left turn. It is capturing every single thing that it sees moving — cars, trucks, birds, rolling balls, dropped cigarette butts, and fusing all that together to make its decisions while driving. If it sees a cigarette butt, it knows a person might be creeping out from between cars. If it sees a rolling ball it knows a child might run out from a driveway. I am truly stunned by how impressive an achievement this is.”— IdeaLab founder/CEO Bill Gross.

Add to that: real-time data from street view, GPS, and Google maps — as shown below from a recent Google patent award — and you’ve got one humungous graphics processing system on board.

Now what if some elements of all this data could also be projected on a special windshield display or on Google Glass for driver override, when needed? Add to that: weather and traffic reports ahead, police-scanner data (to avoid a road chase in progress, let’s say), news reports mentioning local events, oh, and Yelp reports, and Find My Friends or Latitude popup pics, and throw in a live HD action cam (I’m experimenting with a Contour+2 with live HDMI streaming — I need one more for rear-view pics — more on that later) and what about a panorama cam and …. OK, you get the idea. (Would adding an Oculus Rift be over the top?)

 

Amara D. Angelica is editor of KurzweilAI.

A blog for the future


No to "the future" as cold forecast. Yes to the future as human action. As the Little Prince puts it, "as for the future, your task is not to foresee it, but to enable it."

This blog is dedicated to positive ideas for the future and to change. Foresight is both a science of multidisciplinary synthesis and an art: clearing new territory, spotting driving currents, exploring imaginaries...

Above all, it is a Eureka tool for inventing new products and services, elevating or mythologizing a brand and its products, creating the value of value...

Long live the future, because the great thing is that everything is just beginning and everything is possible!

Maryline
