Microsoft HoloLens gets real with robotics, surgery, architecture

Jared Newman for PCWorld: At the 2015 Build conference, Microsoft tried to prove that HoloLens is more than just a neat gimmick. The company showed off several new demos for its “mixed reality” headset, which can map digital imagery onto the user’s physical surroundings. While previous demos had focused on fun ideas like a virtual Mars walk and a living-room-sized version of Minecraft, the Build presentation emphasized real-world applications for businesses and education.

For instance, Microsoft showed how architects could use HoloLens to interact with 3D models laid out virtually in front of them on a table. They might also be able to examine aspects of a building site at full scale, with virtual beams and walls rendered before their eyes.

Not all the presentations were so serious. Microsoft also showed off an actual robot whose controls appeared in the virtual space above the robot’s head. Users could then create a movement pattern for the robot by tapping on the ground. Another demo showed how users could create their own personal screens that followed them around in real space.

Festo BionicANTs and eMotionButterflies

From Festo Bionic: With the bionic butterflies, Festo combines for the first time the ultralight construction of artificial insects with collision-free flying behaviour in a collective. For coordination, the eMotionButterflies make use of a guidance and monitoring system, which could be used in the networked factory of the future... ( additional info )

Like their natural role models, the BionicANTs work together under clear rules. They communicate with each other and coordinate their actions and movements with one another. The artificial ants thus demonstrate how autonomous individual components can solve a complex task together, working as an overall networked system... ( additional info )

What You Wanted To Know About AI

Rundown of the state of AI from FastML: Let’s take a look at how advanced we are, really. Two representative and well-known examples of the current state of the art are: automatic image annotation using a combination of convolutional and recurrent neural networks, and DeepMind’s deep reinforcement learning for playing Atari games... ( cont'd at FastML )

Runt Rover Robot Kits from Actobotics

From Servocity: Simply mount your electronics using our innovative multi-board mounts, which are compatible with a variety of microcontrollers such as the Raspberry Pi, Arduino, and the SparkFun RedBoard. The Runt Rovers™ are perfect for introductory programming and educational applications... ( Servocity available options )

Robotnik's Mobile Manipulator RB-1

From Smashing Robotics: Spanish company Robotnik earlier this week introduced its very own RB-1 mobile manipulator. The robot is designed for indoor use in household as well as professional environments, and is brought to life by well-known Dynamixel Pro series servo actuators, which provide up to 13 degrees of freedom (DOF), depending on the variant. It is well suited for remote manipulation or human assistance applications and can be fully autonomous or manually controlled... ( full article ) ( datasheet )

Beating Super Hexagon With Computer Vision

Two great examples of using computer vision to beat Super Hexagon. Super Hexagon is a really hard game: the player controls a small triangle which circles around a central hexagon (which occasionally collapses into a pentagon or square on the harder difficulties), attempting to avoid contact with incoming "walls".

First example, from Valentin Trimaille's Super Hexagon bot, Ray Casting Wall Detection: The point is that a bot for this game makes a really nice image processing project to start learning OpenCV: simple shapes but lots of effects designed to disturb the human player, a fast-paced game meaning real-time processing is required, and very simple controls: rotate CW or CCW... ( full article )

Second example, from Shaun LeBron's Super Hexagon Unwrapper: This project is written in Python. It employs computer vision algorithms provided by SimpleCV to establish a reference frame in the image. Then it warps (or "unwraps") the image based on that reference frame, using OpenGL fragment shaders... ( github code ) ( full explanation )
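The ray-casting idea behind the first bot can be sketched in a few lines: from the screen center, step outward along a ray at a given angle until a wall pixel is hit; the angle whose ray travels farthest points at the gap the triangle should steer toward. Below is a minimal, illustrative sketch using NumPy on a synthetic binary frame (a real bot would grab actual game frames and threshold them; the frame, sizes, and function names here are assumptions, not the bot's actual code):

```python
import numpy as np

def cast_ray(walls, cx, cy, angle, max_dist=200):
    """Step outward from (cx, cy) along `angle` until a wall pixel is hit.

    `walls` is a 2D boolean array (True = wall pixel). Returns the distance
    at which the ray first hits a wall, or the distance at which it leaves
    the frame if it never hits one.
    """
    dx, dy = np.cos(angle), np.sin(angle)
    for d in range(1, max_dist):
        x, y = int(cx + d * dx), int(cy + d * dy)
        if not (0 <= y < walls.shape[0] and 0 <= x < walls.shape[1]):
            return d  # escaped the frame without hitting anything
        if walls[y, x]:
            return d
    return max_dist

# Synthetic 200x200 frame: a circular wall band with a gap carved on the right.
frame = np.zeros((200, 200), dtype=bool)
yy, xx = np.mgrid[0:200, 0:200]
r = np.hypot(xx - 100, yy - 100)
frame[(r > 60) & (r < 70)] = True                     # wall ring
frame[(np.abs(yy - 100) < 15) & (xx > 100)] = False   # gap at angle ~0 deg

# Cast one ray per degree; the longest ray points at the gap.
angles = np.deg2rad(np.arange(360))
dists = [cast_ray(frame, 100, 100, a) for a in angles]
safest = int(np.argmax(dists))
print(f"safest heading: {safest} deg, clearance {dists[safest]} px")
```

A real-time bot would run this over thresholded screen captures each frame and issue a CW/CCW keypress toward the safest heading; the hard part, as the article notes, is surviving the game's rotation and color-flashing effects, not the geometry.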

Proposed FAA Commercial UAS Rules

From IEEE Spectrum:
- Unmanned aircraft must weigh less than 55 lbs. (25 kg).
- Visual line-of-sight (VLOS) only; the unmanned aircraft must remain within VLOS of the operator or visual observer.
- At all times the small unmanned aircraft must remain close enough to the operator for the operator to be capable of seeing the aircraft with vision unaided by any device other than corrective lenses.
- Small unmanned aircraft may not operate over any persons not directly involved in the operation.
- Daylight-only operations (official sunrise to official sunset, local time).
- Must yield right-of-way to other aircraft, manned or unmanned.
- May use a visual observer (VO), but one is not required.
- A first-person-view camera cannot satisfy the “see-and-avoid” requirement, but can be used as long as the requirement is satisfied in other ways.
- Maximum airspeed of 100 mph (87 knots).
- Maximum altitude of 500 feet above ground level... ( full article )
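The quantitative limits above amount to a simple pre-flight checklist, which can be sketched as a small validation function. This is purely illustrative; the field names and the boolean flags are assumptions for the sketch, not FAA terminology:

```python
# Hypothetical pre-flight check against the proposed limits summarized
# above. Field names and the simplified daylight/VLOS flags are
# illustrative assumptions, not official FAA rule language.

MAX_WEIGHT_LB = 55
MAX_SPEED_MPH = 100
MAX_ALTITUDE_FT = 500

def check_flight(weight_lb, speed_mph, altitude_ft,
                 daylight, vlos, over_bystanders):
    """Return a list of rule violations; an empty list means the plan passes."""
    violations = []
    if weight_lb >= MAX_WEIGHT_LB:
        violations.append("aircraft must weigh less than 55 lbs")
    if speed_mph > MAX_SPEED_MPH:
        violations.append("maximum airspeed is 100 mph")
    if altitude_ft > MAX_ALTITUDE_FT:
        violations.append("maximum altitude is 500 ft above ground level")
    if not daylight:
        violations.append("daylight-only operations")
    if not vlos:
        violations.append("aircraft must remain within visual line of sight")
    if over_bystanders:
        violations.append("may not operate over uninvolved persons")
    return violations

# A small quadcopter flying a conservative daytime VLOS mission passes:
print(check_flight(weight_lb=4.4, speed_mph=35, altitude_ft=380,
                   daylight=True, vlos=True, over_bystanders=False))

# An overweight, too-fast, too-high, night flight collects four violations:
print(check_flight(weight_lb=60, speed_mph=110, altitude_ft=600,
                   daylight=False, vlos=True, over_bystanders=False))
```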


Design & Development - Featured Product

Emulate3D Engineering Software Creates Your Advantage

Emulate3D software helps you model and test your automated material handling system (AMHS) solutions rapidly. Use Demo3D to create running models quickly, then generate videos and stills, or view the models in virtual reality at the click of a button. Sim3D enables you to carry out experimental test runs to select optimal solutions and the most robust operating strategy, and Emulate3D Controls Testing is the best way to debug your PLCs offline and off the project's critical path. Connect to major PLCs, import CAD, and plug into the HTC Vive and Oculus Rift to produce awesome models!