From Quirkbot's Kickstarter page: Quirkbot is a microcontroller toy that anyone can program. It is compatible with the open construction toy Strawbees and can be used along with readily available materials like regular drinking straws, LEDs, and hobby servos (motors) to create a wide variety of hackable toys suitable for anyone aged 10+.
From the Raspberry Pi Foundation: Let’s get the good stuff out of the way above the fold. Raspberry Pi 2 is now on sale for $35 (the same price as the existing Model B+), featuring: a 900MHz quad-core ARM Cortex-A7 CPU (Broadcom BCM2836, ~6x the performance), 1GB of LPDDR2 SDRAM (2x the memory), and complete compatibility with Raspberry Pi 1. In stock now at element14. From Microsoft: We’re excited to announce that we are expanding our Windows Developer Program for IoT by delivering a version of Windows 10 that supports Raspberry Pi 2. This release of Windows 10 will be free for the Maker community through the Windows Developer Program for IoT... ( sign up page )
Mario Lives! An Adaptive Learning AI Approach for Generating a Living and Conversing Mario Agent: Robohub also has links to the rest of the AAAI videos here.
From AkihabaraNews and Robohub: Conceptually, it’s quite simple: joysticks, knobs, and switches manipulated on a custom radio controller direct the flexing and relaxing of pneumatic “artificial muscles” arranged along “arms and legs” that attach to a vehicle’s pedals and levers. When actuated, the robot replicates a human body’s controlling movements. Kowa Tech claims the system delivers input strength equal to that of human limbs and an overall operational capability 80% that of in-vehicle operation (with gains expected as the system matures). The 3-piece modular robot is weather and vibration-proof, weighs only about 30 kg (66 lbs.), easily installs in under 30 minutes, and can be removed in half that time. Current prototypes are operable from up to 200m, and commercial models are expected to reach 1 kilometer. Rounded out with video monitoring for an in-vehicle perspective, in theory ActiveRobo SAM is a near-comprehensive surrogate operator. Most of their demos and testing have thus far been limited to excavator and other heavy equipment work, but the company hopes to eventually make the system adaptable to any vehicle... ( full article )
From Anibit: We have adapted a graphical programming environment for the Arduino known as "Blocklyduino" to be tailored to Pololu's 3Pi robot platform. Blocklyduino is itself an adaptation of Blockly, a software package for developers to create graphical programming environments. All of this is heavily inspired by Scratch and the MIT App Inventor. If you are not familiar with those tools and want to teach kids about programming, you really should check them out. Graphical programming is a great introduction to programming for kids, and the Pololu 3Pi is a low-cost, ready-made robot that kids love. As part of our mission at Anibit to inspire and educate about robotics, we felt the need to bring the two together... ( Anibit link )
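For readers curious what a tool like Blocklyduino does under the hood, here is a minimal, hypothetical sketch: a tree of "blocks" is walked and Arduino-flavored source text is emitted. The block format and helper names (`generate`, `setMotors`) are invented for illustration and are not Blocklyduino's real internals.

```python
# Toy sketch of a Blockly-style code generator: walk a block tree,
# emit Arduino-flavored C source. Entirely illustrative.
def generate(block):
    kind = block["type"]
    if kind == "repeat":
        body = "".join("  " + line + "\n"
                       for child in block["body"]
                       for line in generate(child).splitlines())
        return f"for (int i = 0; i < {block['times']}; i++) {{\n{body}}}"
    if kind == "motor":
        return f"setMotors({block['left']}, {block['right']});"
    if kind == "wait":
        return f"delay({block['ms']});"
    raise ValueError(f"unknown block type: {kind}")

# A "program" a child might snap together: drive, pause, repeat 4 times.
program = {"type": "repeat", "times": 4, "body": [
    {"type": "motor", "left": 50, "right": 50},
    {"type": "wait", "ms": 500},
]}
print(generate(program))
```

The real tool does the same job with a drag-and-drop canvas in place of the nested dictionaries, which is exactly why it works as a first step before text-based programming.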
From DARPA: The teams using the DARPA-developed Atlas robot got their first look at the newly upgraded system during a technical shakeout the week of January 12th in Waltham, Mass. The upgraded Atlas is 75 percent new—only the lower legs and feet were carried over from the original design. Lighter materials allowed for inclusion of a battery and a new pump system with only a modest increase in overall weight; the upgraded robot is 6-foot-2 (1.88 meters) and weighs 345 pounds (156.5 kilograms). The most significant changes are to the robot’s power supply and pump. Atlas will now carry an onboard 3.7-kilowatt-hour lithium-ion battery pack, with the potential for one hour of “mixed mission” operation that includes walking, standing, use of tools, and other movements. This will drive a new variable-pressure pump that allows for more efficient operation... ( full details )
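A quick back-of-the-envelope reading of those battery figures, assuming (beyond what DARPA states) that the full pack capacity is usable over the quoted hour:

```python
# Implied average power draw for the upgraded Atlas, from the
# quoted 3.7 kWh pack and one-hour "mixed mission" runtime.
battery_kwh = 3.7      # onboard lithium-ion pack
mission_hours = 1.0    # quoted mixed-mission endurance

avg_power_kw = battery_kwh / mission_hours
print(avg_power_kw)  # 3.7 kW average draw over the mission
```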
From Novice Art Blogger: I'm experiencing art for the first time; here are my responses. I try my best to decode abstract art using state-of-the-art deep learning algorithms. I sometimes see things hidden in noise. By M P-F ( much more )
From Nvidia's CES press conference: The DRIVE PX platform is based on the NVIDIA® Tegra® X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car. Tegra X1 delivers an astonishing 1.3 gigapixels/second of throughput – enough to handle 12 two-megapixel cameras at frame rates up to 60 fps for some cameras. It is equipped with 10 GB of DRAM memory and combines surround computer vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn. On deep learning computer vision: conventional ADAS technology today can detect some objects, do basic classification, alert the driver, and in some cases, stop the vehicle. DRIVE PX takes this to the next level with the ability to differentiate an ambulance from a delivery truck, or a parked car from one about to pull into traffic. The system can now inform the driver, not just get their attention with a warning. The car is not just sensing, but interpreting what is taking place around it, an essential capability for auto-piloted driving... ( more info )
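The throughput figure can be sanity-checked with simple arithmetic. The sketch below assumes (going beyond the press release, which says only "some cameras" reach 60 fps) that all 12 cameras stream two-megapixel frames at the full rate:

```python
# Back-of-the-envelope check of the DRIVE PX pixel-throughput claim.
# Assumption: all 12 cameras at the full 60 fps, which overstates the
# real workload since only some cameras run that fast.
cameras = 12
megapixels_per_frame = 2
fps = 60

gigapixels_per_second = cameras * megapixels_per_frame * fps / 1000
print(gigapixels_per_second)  # 1.44, consistent with the quoted ~1.3
```

The worst-case 1.44 Gpix/s slightly exceeds the quoted 1.3 Gpix/s, which fits the caveat that not every camera runs at 60 fps.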
From Empire Robotics: The VERSABALL is a squishy balloon membrane full of loose sub-millimeter particles. The soft ball gripper easily conforms around a wide range of target object shapes and sizes. Using a process known as “granular jamming”, air is quickly sucked out of the ball, which vacuum-packs the particles and hardens the gripper around the object to hold and lift it. The object releases when the ball is re-inflated. VERSABALL comes in multiple head shapes and sizes that use the same pneumatic base... ( Empire Robotics' site )
From Yezhou Yang, Yi Li, Cornelia Fermuller and Yiannis Aloimonos: In order to advance action generation and creation in robots beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy... ( article at Kurzweilai.net ) ( original paper )
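As a rough illustration of that two-level design, here is a toy sketch: outputs from two recognizers (grasp type and object) are combined under a small probabilistic grammar to pick the most likely atomic action triple. All categories, probabilities, and rules below are invented for illustration and are not from the paper.

```python
# Toy top layer: fuse grasp-CNN and object-CNN confidences under a
# tiny probabilistic "action grammar" to pick one visual sentence.
grasp_probs = {"power": 0.7, "precision": 0.3}   # pretend grasp CNN output
object_probs = {"knife": 0.6, "cucumber": 0.4}   # pretend object CNN output

# Invented grammar rules: P(action | grasp, object).
action_given = {
    ("power", "knife"): {"cut": 0.8, "poke": 0.2},
    ("power", "cucumber"): {"hold": 0.9, "cut": 0.1},
    ("precision", "knife"): {"cut": 0.5, "poke": 0.5},
    ("precision", "cucumber"): {"hold": 0.6, "cut": 0.4},
}

# Score every (grasp, action, object) triple and keep the best one.
candidates = {
    (g, a, o): pg * po * pa
    for g, pg in grasp_probs.items()
    for o, po in object_probs.items()
    for a, pa in action_given[(g, o)].items()
}
sentence = max(candidates, key=candidates.get)
print(sentence)  # ('power', 'cut', 'knife') under these made-up scores
```

The paper's actual parser works over sequences with a learned grammar; this sketch only shows the flavor of combining module confidences into a single most-probable sentence.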