Hacker News
Google Brain releases two large datasets for robotics research (plus.google.com)
230 points by hurrycane on Aug 23, 2016 | 15 comments


I'm actually really excited about this, because this is the kind of data a tiny startup couldn't collect on its own, so releasing it helps more people compete. I'm still worried that nobody will be able to compete with the big guys long term in RL, simply because they have the money and access to build these huge training sets, but at least this gives someone somewhat of a chance.

Kudos to Vincent, Sergey, Chelsea and Laura for promoting openness in ML research!!


The resources at one's disposal dictate the solution space, not necessarily the quality of the solution. Google is uniquely good at doing things at scale. Some day a grad student who doesn't have the money to buy a GPU might instead invent a form of sample-efficient RL that the big guys never even thought of :)



Damn, 800,000 grasp attempts? Pretty nuts that they need this many... but I guess without tactile feedback, relying only on visual feedback, you need this many attempts to get it right.


Babies take a while to learn how to grasp things; it's believable that the number of attempts they make is in the tens of thousands. One grasp attempt a minute, 8 hours a day for 3 months, is about 43,000 attempts.

Of course they're learning a lot of other things at the same time, so it's not really comparable.


This is something I think people often miss when they compare machine learning to human performance. Humans spend a LOT of time in their early learning-and-calibrating phase. Like, it's our full-time job, 365 days a year, for several years. One interaction every ~5 seconds, 12 hours a day, seems a modest estimate. That's over 3 million training examples per year.
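For what it's worth, the arithmetic checks out; a quick sanity check in Python (the ~5-second and 12-hour figures are the estimates above, not measured data):

```python
# Back-of-the-envelope check: one interaction every ~5 s,
# 12 hours a day, 365 days a year.
interactions_per_day = 12 * 3600 // 5        # 8,640 interactions/day
interactions_per_year = interactions_per_day * 365
print(interactions_per_year)                 # 3,153,600 -- "over 3 million"
```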


Well, it's a hard problem. Looking at the data, could you come up with a way to learn control more quickly?

I.e., given a series of pixel intensities, could you send commands to each motor to get them to grasp something?
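One way to at least frame that interface; a minimal linear-policy sketch in numpy, where the 7-motor arm, the 64x64 frame, and `grasp_policy` itself are all illustrative assumptions, not anything from the actual dataset:

```python
import numpy as np

def grasp_policy(frame, weights, bias):
    """Map a grayscale camera frame to one command value per motor.

    frame:   (H, W) array of pixel intensities in [0, 1]
    weights: (n_motors, H*W) learned parameter matrix
    bias:    (n_motors,) offset per motor
    """
    return weights @ frame.ravel() + bias

# Toy usage with random parameters: 7 motors, a 64x64 frame.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
w = rng.normal(scale=1e-3, size=(7, 64 * 64))
b = np.zeros(7)
commands = grasp_policy(frame, w, b)
print(commands.shape)  # (7,)
```

The hard part, of course, is learning `weights` from the grasp data; a real policy would be a deep network rather than a linear map.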


Do cutting-edge robots still not incorporate tactile feedback?


I'm sure that many do have tactile feedback, but every touch switch or pressure sensor is a subsystem you must decide is worth the added complexity. I doubt any simple approach works as well as human skin.

As always, the thing that gets better fastest is the computing. The mechanics and the sensors improve too -- but according to a slower set of laws.


Awesome - I think this could be really useful for action recognition. Collecting large video datasets is really challenging, and Google's robot array is a great way to repeatedly create these kinds of custom datasets.


Very cool, but isn't this a reinforcement learning task? Don't you need access to the machines to get them to learn?


Only for some algorithms. Many RL algorithms can learn off-policy. Or you can treat it as a supervised problem: "given this image, predict the action that was taken". (Think of AlphaGo initially being trained to predict human players' next moves from a large KGS corpus of games; no interactive play was required.)

Admittedly, I'm not entirely sure what you would do with any of that after all the learning is done, if you don't eventually have one of those very expensive robot arms to use.


As far as I know, most robotics research is done offline, i.e. in simulation (e.g. via Gazebo, ROS, etc.). Obviously one can and does make the step to reality eventually (which is always messy), but you can go miles using simulation only.

I would even say the learning curve from nothing (assuming proficiency in basic CS and programming) to successful demonstrations and experiments in a simulator is steeper than the curve from simulation to a real robot. The first step costs time, the second costs money.


This is accurate.

I've seen people use multiple levels of simulation complexity. That way they can learn the basics in a simple environment and use that as a prior for the more complex (and therefore slower/more costly to run) simulators. It's like bootstrapping up to the real thing.

In general, being able to run things in a simulator is critical to software testing. If your only method of doing integration testing and parameter optimization is the real robot, you will have a bad time, quite possibly ending with a crashed robot.


I totally agree.

Personal rant incoming:

I think the ideas and intentions of ROS, Gazebo, and co. are great, but in my humble opinion things are a lot more complicated than they should be (as always in the world of software).

First, I think the robotics world has a non-free vibe going on, in particular from companies big in the robot-building business. They mostly ship their own broken (sometimes Windows-only) API for their robot, and it is mostly a mess. In particular, they often do too much, like providing shitty proprietary Cartesian path planning or developing strange in-house communication protocols. I know this is hard real-time territory and the problems are hard, but I definitely feel a tension between the mostly-researcher software crowd and the providers of the "real" thing, the actual robot, e.g. a manipulator, which still costs on the order of 100,000 euros.

Secondly, I'm afraid that ROS is going down the CORBA path. The very fact that they ship their own build system (an ongoing debate on the ROS mailing list) rings alarm bells for me, and I hope they can move forward. Still, kudos to the developers! Robotics is a hard and interdisciplinary topic, so one inherits all the problems of the software AND hardware worlds.



