25 teams compete on a disaster-simulated course, and one winning robot will take home $2 million. CuriosityStream will bring you top-of-the-line coverage of the event. Get up close with the robots, meet the brains behind the technology - and explore the past, present, and future of robots with our new lineup of Science/Technology programming. Join CuriosityStream and DARPA as we discover which robot will save the day!
From Lyndsey Gilpin for TechRepublic: The DARPA Robotics Challenge (DRC) Finals will be held in Pomona, California from June 5-6, and the robots that come out of it could make some big impacts (or take over the world). Here's a summary of what you should know. 1. It began with the desire to improve humanitarian assistance and disaster relief The Fukushima disaster in Japan in 2011 was an inspiration for the competition, according to Dr. Gill Pratt, the DRC program manager. The team realized that while we never know what the next disaster will be, we need better tools and techniques to respond to these kinds of events. And robots have massive potential. "The particular part that we've chosen to focus on, here, is technology for responding during the emergency part of the disaster during the first day or two," Pratt said in a media briefing several weeks before the competition. "So this is not about, for instance, robotics for doing the restoration of the environment many, many weeks, years after the disaster, but rather the emergency response at the beginning." Cont'd..
By David Szondy for Gizmag: One of the biggest events at the recent 2015 IEEE International Conference on Robotics and Automation (ICRA) in Seattle was the first Amazon Picking Challenge, in which 31 teams from around the world competed for US$26,000 in prizes. The challenge set entrants the real-world task of building a robot that can do the same job as an Amazon stock picker. According to Amazon Chief Technology Officer Peter Wurman, who initiated the challenge, the task of picking items off the shelf may seem simple, but it involves all domains of robotics. The robot has to be capable of object and pose recognition. It must be able to plan its grasps, adjust manipulations, plan how to move, and be able to execute tasks while noticing and correcting any errors. This might suggest that the robots would need to be of a new, specialized design, but for the Picking Challenge, Amazon made no such requirement. According to one participant we talked to, the more important factors were sensors and computer modelling, so ICRA 2015 saw all sorts of robots competing, such as the general-purpose Baxter and PR2, industrial arms of various sizes, and even special-built frames that move up, down, left or right to position the arm. Even the manipulators used by the various teams ranged from hooks to hand-like graspers and vacuum pickups. Continue reading for competition results:
A man paralyzed by a gunshot more than a decade ago can shake hands, drink beer and play "rock, paper, scissors" by controlling a robotic arm with his thoughts, researchers reported. Two years ago, doctors in California implanted a pair of tiny chips into the brain of Erik Sorto that decoded his thoughts to move the free-standing robotic arm. The 34-year-old has been working with researchers and occupational therapists to practice and fine-tune his movements. It's the latest attempt at creating mind-controlled prosthetics to help disabled people gain more independence. In the last decade, several people outfitted with brain implants have used their minds to control a computer cursor or steer prosthetic limbs. Full Article:
ABB, a leading power and automation group, announced it acquired Gomtec GmbH to expand its offering in the field of collaborative robots. The parties agreed not to disclose financial terms of the transaction. Gomtec, based near Munich, Germany, is a privately held company that develops mechatronic systems combining mechanical, electrical, telecommunications, control and computer engineering for customers in diverse industries. It has 25 employees. Gomtec's technology platform will strengthen ABB's development of a new generation of "safe-by-design" collaborative robots that can be operated outside of cages or protective fencing, expanding opportunities to deploy them in new applications.
By David Szondy for Gizmag: On June 5 and 6, the 2015 DARPA Robotics Challenge (DRC) Finals will take place at Fairplex in Pomona, California. Open to the public, it will see 25 international teams compete for US$3.5 million in prizes as part of an effort to develop robots for disaster relief. Here's what to expect. This year's challenge will see 25 teams competing. Half of the teams are from the United States, five are from Japan, three from Korea, two from Germany, one from Italy, one from Hong Kong, and one from the People’s Republic of China. They will be vying for a total of US$3.5 million in prizes, including a $2 million first prize, a $1 million second prize, and a $500,000 third prize. The robots will vary widely, with some humanoid, some four-legged, and some tracked, but all will need to operate without external power or mechanical support, and with only limited communications with their controllers. The basic idea behind DRC 2015 is to make things much harder for the robots than previously.
By Sharon Gaudin for ComputerWorld: Worried that one day we'll have robot overlords? You're in good company. Renowned physicist, cosmologist and author of A Brief History of Time, Stephen Hawking said this week that robots, powered by artificial intelligence (A.I.), could overtake humans in the next 100 years. Speaking at the Zeitgeist conference in London, Hawking said: "Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours," according to a report in Geek. This isn't the first time Hawking has spoken about the threat that comes along with machine learning, A.I. and robotics. In December, Hawking said, "the development of full artificial intelligence could spell the end of the human race."
Fanuc claims that it is the first robot manufacturer to produce a heavy-duty robot designed to work safely alongside humans. Its CR-35iA robot can perform tasks involving payloads of up to 35kg without needing the protective guards and fences that have previously been needed for robots with similar lifting capacities. Although there are already several other collaborative robots on the market, most are designed for much lower payloads. The new robot will stop automatically if it touches a human operator. A soft covering material also reduces the force of any impacts and prevents human operators from being pinched by the mechanism. And if the robot comes too close to an operator, they can simply push it away. The covering has a green colour to distinguish it from Fanuc’s usual yellow robots. The six-axis robot is designed for duties such as transferring heavy workpieces or assembling parts. By avoiding the need for safety barriers, it is claimed to improve production efficiencies and allow higher levels of automation.
Vicki Speed for Inside Unmanned Systems: It would seem that robotic systems could provide an extra measure of safety, as well as a higher level of efficiency and machine-consistent quality. Yet, to date, the use of robotic systems on construction jobsites has been minimal. The building industry, however, is looking with fresh eyes at robots, including at least three new systems expected to be available this year, with a focus on near-term efficiencies that make investment in the systems make sense. Demolition Days: Among the first fully realized applications of robots in the construction environment are those used to support work that comes at the end of a structure's life, namely demolition. In fact, remotely operated demolition robots have been around for more than a decade. Robotic Building Blocks: Can robots actually build? The short answer is, 'Yes.' There are robotic systems in development around the world that can lay bricks, set tile or finish concrete floors. Bionic Builders? While not autonomous systems, robotic exoskeletons, those high-tech wearable suits seen in futuristic movies that help mere mortals defend Earth against other beings, could be a very real part of tomorrow's jobsite and a possible precursor to autonomous robots in the field.
Jared Newman for PCWorld: At the 2015 Build conference, Microsoft tried to prove that HoloLens is more than just a neat gimmick. The company showed off several new demos for its “mixed reality” headset, which can map digital imagery onto the user’s physical surroundings. While previous demos had focused on fun ideas like a virtual Mars walk and a living room-sized version of Minecraft, the Build presentation emphasized real-world applications for businesses and education. For instance, Microsoft showed how architects could use HoloLens to interact with 3D models, laid out virtually in front of them on a table. They might also be able to examine aspects of a building site at full scale, with virtual beams and walls rendered before their eyes. Not all the presentations were so serious. Microsoft also showed off an actual robot whose controls appeared in the virtual space above the robot’s head. Users could then create a movement pattern for the robot by tapping on the ground. Another demo showed how users could create their own personal screens that followed them around in real space.
From Festo Bionic: With the bionic butterflies, Festo for the first time combines the ultralight construction of artificial insects with collision-free flying behaviour in a collective. For coordination purposes, the eMotionButterflies make use of a guidance and monitoring system, which could be used in the networked factory of the future... ( additional info ) Like their natural role models, the BionicANTs work together under clear rules. They communicate with each other and coordinate their actions and movements among each other. The artificial ants thus demonstrate how autonomous individual components can solve a complex task together, working as an overall networked system... ( additional info )
Rundown of the state of AI from FastML: Let's take a look at how advanced we are, really. Two representative and well-known examples of the current state of the art are: automatic image annotation using a combination of convolutional and recurrent neural networks, and DeepMind's deep reinforcement learning for playing Atari games ( cont'd at FastML )
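To make the second example concrete: deep reinforcement learning for Atari replaces the lookup table of classic Q-learning with a neural network over raw pixels. Below is a minimal tabular sketch on a made-up 5-state chain world (the environment, hyperparameters, and names here are purely illustrative, not taken from the FastML post or DeepMind's work):

```python
import random

random.seed(0)
N_STATES = 5                 # states 0..4 on a chain; state 4 is the goal
ACTIONS = [-1, +1]           # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # break ties randomly so the untrained agent still explores
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Q-learning update: bootstrap from the best next-state value
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every non-goal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

The deep variant keeps exactly this update rule but estimates Q with a convolutional network and stabilizes learning with experience replay and a target network.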
From Servocity: Simply mount your electronics using our innovative multi-board mounts, which are compatible with a variety of microcontrollers such as the Raspberry Pi, Arduino and the SparkFun RedBoard. The Runt Rovers™ are perfect for beginning light programming and educational applications... ( Servocity available options )
From Smashing Robotics: Spanish company Robotnik introduced earlier this week its very own RB-1 mobile manipulator. The robot is designed for indoor use in household as well as professional environments, and is brought to life by well-known Dynamixel Pro series servo actuators, which add up to 13 degrees of freedom (DOF), depending on the variant. It is well suited for remote manipulation or human assistance applications and can be fully autonomous or manually controlled... ( full article ) ( datasheet )
Two great examples of using computer vision to beat Super Hexagon. Super Hexagon is a really hard game: the goal is to control a small triangle that circles around a central hexagon (which occasionally collapses into a pentagon or square on the Hexagon and Hyper Hexagon difficulties) while avoiding contact with incoming "walls". First example, from Valentin Trimaille's Super Hexagon bot: Ray Casting Wall Detection. The point is that a bot for this game makes a really nice image-processing project to start learning OpenCV: simple shapes but lots of effects designed to disorient human players, a fast-paced game meaning real-time processing is required, and very simple controls: rotate CW or CCW... ( full article ) Second example, from Shaun LeBron's Super Hexagon Unwrapper: This project is written in Python. It employs computer vision algorithms provided by SimpleCV to establish a reference frame in the image. Then it warps (or "unwraps") the image based on that reference frame, using OpenGL fragment shaders... ( github code ) ( full explanation )
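The ray-casting idea behind the first bot can be sketched in a few lines: march outward from the screen centre along a ray and report the distance to the first bright pixel, which marks an incoming wall. This is a minimal illustration on a synthetic frame, not Trimaille's actual code; the real bot runs the same idea on live screen captures via OpenCV.

```python
import numpy as np

# Fake a 200x200 grayscale frame with a bright "wall" arc at radius 60,
# spanning -45 to +45 degrees, roughly 5 pixels thick.
H, W = 200, 200
cx, cy = W // 2, H // 2
frame = np.zeros((H, W), dtype=np.uint8)

yy, xx = np.mgrid[0:H, 0:W]
rad = np.hypot(xx - cx, yy - cy)
ang = np.arctan2(yy - cy, xx - cx)
frame[(np.abs(rad - 60) < 3) & (np.abs(ang) < np.pi / 4)] = 255

def cast_ray(img, theta, thresh=128, max_r=90):
    """Step outward from the centre; return distance of the first bright pixel."""
    for r in range(5, max_r):          # start past the player triangle
        x = int(cx + r * np.cos(theta))
        y = int(cy + r * np.sin(theta))
        if img[y, x] >= thresh:
            return r
    return None                        # no wall along this ray

print(cast_ray(frame, 0.0))     # ray into the arc  -> 58
print(cast_ray(frame, np.pi))   # ray away from arc -> None
```

Casting a fan of such rays per frame gives a distance profile around the player; the bot then rotates toward the gap with the largest free distance.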