Several Vision Algorithms for Mobile Robots

When people hear "mobile robot," the first image that comes to mind is often a service robot, but driverless cars and autonomous drones also belong to the mobile robot category. Like a person walking or a bird flying, these machines move freely through a specific environment, relying on localization, path planning, and obstacle avoidance, and vision algorithms are the key technology behind all of these capabilities. If you decompose a mobile robot's vision stack, you will find that extracting depth information, localizing and navigating, and avoiding obstacles each rest on different but indispensable visual algorithms. This article walks through several of them. The author, Chen Zichong, is an architect and algorithm lead at Segway Robot.

The kinds of vision algorithms on a mobile robot

Q: To achieve localization, path planning, and obstacle avoidance, what algorithmic support does a robot need along the way?

When people think of a mobile robot, the request they imagine may be something like: "Hey, could you go over there and fetch me a hot latte?" This sounds trivial to a human, but in the world of robots the task is full of challenges. To complete it, the robot must first have a map of its surroundings and localize itself precisely within that map, then plan a path over the map and control its motion to execute it. While moving, it must also use real-time 3D depth information about the scene to avoid obstacles until it reaches the final goal point. This chain of robot behavior decomposes into the following vision algorithms:

1. Depth information extraction
2. Visual navigation
3. Visual obstacle avoidance

We will discuss these algorithms in detail below; their common foundation is the visual sensor on the robot.

The foundation of vision algorithms: sensors

Q: Can a smartphone camera serve as the eyes of a robot?

All vision algorithms ultimately depend on the visual sensors the robot carries. A human's eyes and the eyes of an animal with excellent night vision perceive the world completely differently, and likewise a one-eyed animal perceives the world differently from a two-eyed one. The smartphone camera in your pocket can in fact serve as a robot's eye; the hugely popular Pokemon Go, for example, uses computer vision to achieve its AR effect.

As the figure shows, a smartphone camera module contains a few important components: the lens assembly, the IR filter, and the CMOS sensor. The lens is usually composed of several elements; thanks to sophisticated optical design, inexpensive resin materials can now deliver very good image quality in phone cameras. On top of the CMOS sensor sits a color filter array known as a Bayer filter. Each filter element passes only a specific band of wavelengths, so different positions on the CMOS sensor record the light intensity of different colors. If the CMOS sensor has a resolution of 4000x3000, then to obtain an RGB color image of comparable resolution, the camera needs a computation called demosaicing, which reconstructs full RGB information from the two green, one blue, and one red samples in each 2x2 grid.
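To make the 2x2 arithmetic concrete, here is a minimal NumPy sketch of the idea. It is not a real ISP pipeline: the RGGB layout and the simple averaging of the two greens are illustrative assumptions, and a real demosaicer interpolates back to full resolution instead of halving it as this sketch does.

```python
import numpy as np

def demosaic_rggb(raw):
    """Naive demosaicing of an RGGB Bayer mosaic.

    Each 2x2 cell holds [[R, G], [G, B]]; we reconstruct one RGB pixel
    per cell by taking R and B directly and averaging the two greens.
    Real ISPs interpolate (bilinearly or edge-aware) to keep full
    resolution rather than halving it.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    r = raw[0::2, 0::2].astype(np.float32)
    g = (raw[0::2, 1::2].astype(np.float32) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, g, b], axis=-1)  # shape (h/2, w/2, 3)

# Example: a synthetic 4x4 mosaic -> 2x2 RGB image
mosaic = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(demosaic_rggb(mosaic).shape)  # (2, 2, 3)
```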
Apart from its selectivity among red, green, and blue, a typical CMOS sensor is also transparent to infrared light. An IR filter is therefore added to the optical path to remove the infrared component of sunlight that would otherwise interfere with the CMOS sensor. With the filter in place, image contrast usually improves markedly.

Q: What other sensors are used in computer vision?

Besides RGB cameras, computer vision also commonly uses several special-purpose cameras. One kind, for example, has a filter that passes only the infrared band. Since the human eye normally cannot see infrared light, an active infrared light source can be placed around such a camera and used for applications such as ranging. Additionally, most of the cameras we use implement electronic exposure as a rolling shutter: as shown on the left of the figure, to reduce the cost of electronic components, the rows are exposed one after another, which inevitably distorts the captured image when an object moves quickly. For vision algorithms that compute based on solid geometry (for example VSLAM), avoiding this distortion matters, so choosing a global-shutter camera becomes particularly important.

Depth cameras are the sensors needed by another large family of vision algorithms. They fall into a few categories:

1. TOF sensors (for example, the second-generation Kinect), similar to an insect's compound eye. High cost; usable outdoors.
2. Structured-light sensors (for example, the first-generation Kinect), based on triangulation. Medium cost; not usable outdoors.
3. Stereo vision (for example, the Intel RealSense R200), with either active or passive illumination, in IR or visible light. Low cost; usable outdoors.

Algorithm 1: Depth information extraction

Q: How does a depth camera recover the depth of an object?

In short, the principle is to use two parallel cameras and triangulate every point in the space. By matching the positions at which the same point appears in the left and right images, we can compute the distance to the corresponding 3D point.
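As a minimal sketch of this triangulation: for a rectified, parallel stereo pair, similar triangles give depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of the matched point. The numbers in the example below are made up for illustration.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point seen by two parallel (rectified) cameras.

    Z = f * B / d: f is the focal length in pixels, B the baseline in
    meters, d the horizontal shift (disparity) of the matched point.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point at infinity
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, baseline = 7 cm, disparity = 14 px -> Z = 3.5 m
print(depth_from_disparity(14.0, 700.0, 0.07))
```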
Academic research on recovering depth maps from stereo matching has a very long history; NASA's Mars rovers already used the technology. But depth sensors only truly found wide application in the consumer electronics market with Microsoft's Kinect, which used structured-light technology licensed from the Israeli company PrimeSense (since acquired by Apple).

Its principle sidesteps the complex algorithm design of stereo matching: one camera is replaced by an infrared projector that actively casts a complex speckle pattern outward, and the camera at the other, parallel position becomes an infrared camera that can clearly see all the speckles the projector casts. Because people cannot see the infrared speckles, yet their texture is very rich, the pattern is extremely favorable to stereo matching, so a very concise algorithm suffices to recover depth information.

Although Microsoft never officially published the Kinect's underlying algorithm, the authors of a recent paper, "Kinect Unleashed", publicly reverse-engineered how the system works. First, the infrared image is upsampled 8x along the baseline direction, which guarantees 3 bits of sub-pixel precision after stereo matching. Next, a Sobel filter is applied to the image to improve matching accuracy. The image is then matched against a pre-stored template image of the projected speckle pattern using SAD block matching, an algorithm whose computational complexity is low and which is well suited to hardware implementation and parallelization. Finally, after simple post-processing, the result is downsampled back to the original resolution to produce the final depth map.
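To illustrate the block matching step, here is a deliberately naive NumPy sketch of SAD matching along the baseline. The window size and disparity range are arbitrary choices; the real Kinect pipeline runs the same idea in hardware, on upsampled, Sobel-filtered images, against the stored pattern template.

```python
import numpy as np

def sad_block_match(left, right, max_disp=16, win=3):
    """Brute-force SAD block matching along the baseline (image rows).

    For each pixel, slide a (2*win+1)^2 window across the right image
    by up to max_disp pixels and keep the disparity with the smallest
    sum of absolute differences. Written for clarity, not speed; real
    implementations vectorize or run in silicon.
    """
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    L, R = left.astype(np.int32), right.astype(np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = L[y - win:y + win + 1, x - win:x + win + 1]
            best, best_d = None, 0
            for d in range(max_disp):
                cand = R[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Fake a stereo pair with a known 5-pixel disparity
left = np.random.randint(0, 255, (32, 64), dtype=np.uint8)
right = np.roll(left, -5, axis=1)
print(sad_block_match(left, right)[16, 32])  # -> 5
```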
As we can see, after the Kinect's explosive 2009 launch in the consumer market (a million units sold in the first ten days), it gradually spawned a wave of R&D into similar depth-sensing technologies for mobile devices. From 2013 onward, with the growth of computing power and the progress of algorithms, active and passive stereo depth cameras built from lower-cost hardware began to appear on mobile phones in large numbers. Stereo matching algorithms once considered too heavy to run in real time now deliver very good 3D image quality even without supplementary active structured light.

Segway Robot uses a stereo depth vision system that can switch between active and passive modes. As the figure shows, the three sensors on the left are, respectively, the left infrared camera, the infrared pattern projector, and the right infrared camera. When working indoors, where ambient infrared light is weak, the infrared projector is switched on to assist the stereo matching algorithm. When working outdoors, where infrared light is plentiful, the projector is switched off and the stereo matching algorithm runs directly. Taken together, the system shows excellent depth sensing both indoors and outdoors.

Algorithm 2: Localization and navigation

Q: Once the vision processing is in place, how does the robot navigate?

Robot navigation is itself a fairly complex system, involving roughly the following technologies:

• Visual odometry (VO)
• Mapping, using VO and depth maps
• Relocalization: recognizing the current position within a known map
• Loop closure detection: eliminating the accumulated loop error of VO
• Global navigation
• Visual obstacle avoidance
• Scene tagging: recognizing the objects in a room and tagging them

When the robot powers on, visual odometry starts working, recording 6DOF localization information from the boot-up pose onward. As the robot moves, the mapping algorithm begins to build the world the robot sees, recording the rich feature-point information of the space as planar map information in the robot's map. When the robot loses its own coordinates during motion, because of occlusion or a power cut, the relocalization algorithm estimates its current position from the known map. Furthermore, when the robot returns to a place on the map where it has been before, the accumulated drift of visual odometry causes the trajectory to fail to close; the loop closure algorithm must detect and correct this error. Once a global map exists, the robot can be given goal points as commands and navigate to them autonomously. In reality, because the environment changes constantly, the global map cannot fully reflect the obstacle situation at navigation time, so a visual obstacle avoidance algorithm layered on top of global navigation must adjust the motion in real time. Finally, a fully automatic navigation system also needs the robot to automatically recognize and understand the different objects in the space, along with their positions, heights, and sizes. With these tags overlaid on the map, the robot can understand its environment semantically, and the user can also issue commands at a higher semantic level.

Q: What difficulties does visual VSLAM face in practice on a robot?

Visual VSLAM is an algorithmic system that combines visual odometry, mapping, and relocalization, and it has developed very rapidly in recent years. Feature-based visual SLAM started from the classic PTAM algorithm, and algorithms represented by ORB-SLAM can now run in real time on a PC. Below is the block diagram of ORB-SLAM. As the name suggests, it uses ORB as its image feature extraction tool, and the same set of features is reused downstream in mapping and relocalization. Compared with traditional SIFT and SURF feature extraction, its efficiency is much higher.
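To give a sense of this building block, the sketch below extracts and matches ORB features between two frames with OpenCV. The synthetic frames are placeholders standing in for consecutive camera images, and this is only the feature front end, not ORB-SLAM itself.

```python
import cv2
import numpy as np

# Two synthetic frames: a textured image and a shifted copy,
# standing in for consecutive camera frames.
frame1 = cv2.GaussianBlur(
    np.random.randint(0, 256, (240, 320), dtype=np.uint8), (5, 5), 0)
frame2 = np.roll(frame1, 3, axis=1)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

# Binary ORB descriptors are compared with Hamming distance;
# crossCheck keeps only mutual best matches.
if des1 is not None and des2 is not None:
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    print(len(kp1), len(kp2), len(matches))
```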
ORB-SLAM comprises three parallel threads: tracking, mapping, and loop closing. The tracking thread runs at the front end and guarantees real-time operation; the mapping and loop closing threads run at the back end, where speed need not be real time, but they share the same map data with the tracking thread and can refine it online so that both map precision and tracking precision improve.

The next figure shows the main data structures of the ORB-SLAM map: point clouds and keyframes. The map establishes relationships between 2D features on images and 3D points in the point cloud, and at the same time maintains a covisibility graph between keyframes. Through these data associations, an optimization method maintains the whole map.

ORB-SLAM still presents the following difficulties when applied on a robot:

1. The computational load is too large; it typically occupies around 60% of the CPU on a four-core processor.
2. Tracking can be lost irrecoverably when the robot moves too fast.
3. Monocular SLAM suffers from scale indeterminacy.

When the robot rotates quickly, this last problem is especially apparent: loop closure errors can very quickly grow too large to correct. Two kinds of methods address the scale problem: add a second camera to form a stereo SLAM system, or add an IMU to form a loosely or tightly coupled visual-inertial localization system. Here we briefly introduce the loosely coupled visual-inertial localization system. It generally treats VSLAM as a black box, feeding its output as an observation into an EKF system built around the IMU; the final fused EKF output is the system output. Since camera data and IMU data are normally not synchronized, hardware timestamping is needed to determine which IMU timestamp each image corresponds to. In the EKF propagate step, the higher-rate IMU data continuously updates the EKF state. When camera data arrives, it triggers the EKF update step, which updates the state variables and covariance matrix according to the EKF observation model, and then re-applies the updates for all IMU states later than the camera data.
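The following toy sketch compresses this loose-coupling loop to one dimension to show the structure: high-rate IMU accelerations drive the propagate step, and each VSLAM position fix triggers an update. All noise parameters here are invented; a real VIO filter tracks full 6DOF pose plus velocity and IMU biases, and re-applies the states newer than the camera timestamp as described above.

```python
import numpy as np

class LooselyCoupledEKF1D:
    """Toy 1-D illustration of loose coupling: IMU acceleration drives
    the high-rate propagate step; each VSLAM pose arrives as a low-rate
    observation that triggers the update step."""

    def __init__(self):
        self.x = np.zeros(2)            # state: [position, velocity]
        self.P = np.eye(2)              # state covariance
        self.Q = np.diag([1e-4, 1e-2])  # process noise (assumed)
        self.R = np.array([[1e-2]])     # VSLAM observation noise (assumed)

    def propagate(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt**2, dt]) * accel
        self.P = F @ self.P @ F.T + self.Q

    def update(self, vslam_position):
        H = np.array([[1.0, 0.0]])               # observe position only
        y = vslam_position - H @ self.x          # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

ekf = LooselyCoupledEKF1D()
for _ in range(20):                     # 200 Hz IMU between camera frames
    ekf.propagate(accel=0.1, dt=0.005)
ekf.update(np.array([0.0025]))          # 10 Hz VSLAM pose arrives
print(ekf.x)
```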
Segway Robot uses an industry-leading visual-inertial localization system. Below is the result of running a loop inside a corridor and returning to the origin. Its specific advantages are:

1. Very small loop closure error is guaranteed even at large scale.
2. It runs in real time with a small CPU footprint.
3. It tolerates fast rotation and similar motion without losing tracking.

Algorithm 3: Obstacle avoidance

Q: What is the principle behind vision-based obstacle avoidance?

Navigation solves the problem of guiding a robot toward a goal. When the robot has no map, the method of moving toward the goal is called visual obstacle avoidance. The problem obstacle avoidance algorithms solve is: given the visual sensor data, avoid static and dynamic obstacles while still maintaining motion toward the target direction, navigating autonomously in real time.

There are many obstacle avoidance algorithms, but these methods rest on strict assumptions: they assume obstacles are circular, or that the robot is circular, or that the robot can move in any direction, or that it can only follow circular-arc paths. In real applications, robots rarely satisfy these conditions. The VFF algorithm, for instance, assumes the robot is a point that can move in any direction. VFH+ assumes the robot is a circle and handles its footprint by inflating the obstacles; when it does consider kinematics, it merely assumes the robot moves along circular arcs. DWA also assumes the robot is a circle and, when considering kinematics, simulates only forward motion along circular arcs.

By contrast, we do not restrict the robot's shape, and when considering kinematics we simulate a variety of motion models rather than confining ourselves to circular arcs, which lets the robot find much better maneuvers for slipping past obstacles. The figure shows how different kinematic models lead to different avoidance results: the left image shows the trajectories simulated with an arc-only model, the right image the trajectories simulated with another class of motion model. In this kind of narrow environment, the method can predict the obstacle situation in many directions ahead of time, and choosing the right model helps find a more correct motion path around the obstacles.

The difference between our approach and the current common obstacle avoidance algorithms is that it abstracts the kinematic model into the environment map, after which any common obstacle avoidance algorithm can be applied on top. This decouples the kinematic model from the algorithm, so any obstacle avoidance algorithm can be plugged in.

Segway Robot's obstacle avoidance system fuses sensors such as the depth camera, ultrasonic sensors, and the IMU, and can avoid obstacles freely in complex environments. This figure is a snapshot of our obstacle avoidance system; you can see the depth map and the 2D obstacle avoidance map. The colored lines represent the avoidance decisions at each moment.

Selected Q&A

Q: Why choose an IR camera rather than a traditional RGB camera? What advantages does an IR camera have?
A: An IR camera can see things invisible to the human eye. For instance, a depth camera needs to project an infrared texture indoors to assist depth estimation. People cannot see it, but an IR camera can.

Q: Robot navigation today is mainly SLAM; are there other navigation technologies? What are the main popular SLAM techniques? What are the similarities and differences between the visual navigation technologies used for driverless cars and for drones?
A: SLAM is a foundational module within navigation, and it comes in many varieties: monocular, stereo, and depth cameras, as well as IMU-plus-vision, are all basic sensor configurations for the algorithm. A stereo camera adapts very well to both indoor and outdoor environments, and it can actually be quite small; the camera module Segway Robot uses is about 10 cm long.

Q: Is there a navigation map for robot navigation today, similar to in-car navigation maps? What map data is used for robot navigation?
A: No such robot navigation map exists yet, but it is a hot area of R&D, for example the contest between Tesla's and Mobileye's maps.

Source: Zhihu column, Hard Create open class