Autonomous underwater robots, situationally aware autonomous ships, drones, small student-built ocean-observing satellites, warehouse robots, and humanoids are all powered by the continuous and relentless improvement in computational power, tremendous advances in low-cost sensors and sensor systems, and enabling technologies such as computer vision. Reproducing visual sensing and perception capabilities for such robotic systems provides a very powerful and highly desired tool. Ideally, this enables a robot to perceive and interpret its surroundings so that it can use this information to execute different tasks in a real-world environment. Because robots operate in varied environments and are equipped with different sets of visual sensing devices, generic “interpretation of the world” is very challenging. In this presentation I wish to introduce certain aspects of the world of “robotic vision”, where we try to teach robots to understand, plan, learn and act in an intelligent way.