shadows and opengl
(to take a look)
Wednesday, 4 November 2009
Robotic Vision Class
Robotic Vision //
Today I attended a class on robotic vision here at the University of Plymouth, invited by João Paulo Gomes (computer scientist and friend), and learned about different kinds of cameras, some using DMA - Direct Memory Access (as in cell phone cameras). In the practical classes we will use VGA cameras and OpenCV. Some of the topics were: object recognition (size, area, etc.); feature extraction (edges, regions, shapes, etc.); morphometric measures; the Hough transform. Coming back to the I-DAT, I googled "contrast edges processing 1.0" to find something using Processing, and here it is, just to illustrate what I learned about today - Edge Detection
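Since what that example does is basically a convolution, here is a minimal Processing sketch of the same idea (my own toy version; "test.jpg" is just a placeholder file name): convolve a grayscale image with a 3x3 edge kernel and display the result.

// Minimal edge detection: convolve a grayscale image with a 3x3
// high-pass (edge) kernel. "test.jpg" is a placeholder file name.
PImage img;
float[][] kernel = { { -1, -1, -1 },
                     { -1,  8, -1 },
                     { -1, -1, -1 } };

void setup() {
  size(640, 480);
  img = loadImage("test.jpg");
  img.resize(width, height);
  img.filter(GRAY);
  noLoop();
}

void draw() {
  PImage edges = createImage(width, height, RGB);
  img.loadPixels();
  edges.loadPixels();
  for (int y = 1; y < height - 1; y++) {
    for (int x = 1; x < width - 1; x++) {
      float sum = 0;
      // weight each neighbour by the kernel and accumulate
      for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
          int pos = (y + ky) * width + (x + kx);
          sum += kernel[ky + 1][kx + 1] * brightness(img.pixels[pos]);
        }
      }
      edges.pixels[y * width + x] = color(constrain(sum, 0, 255));
    }
  }
  edges.updatePixels();
  image(edges, 0, 0);
}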
The professor explained to me that, to work with hand tracking, the finger needs to cover no fewer than 2 pixels in the image. João told me that if I have a projection of 3000x3000 mm it will still be possible to work with a VGA camera (640x480) if I consider a small fingertip of 5x5 mm (3000/5 = 600 fingertip-sized cells, which still fits within the 640 pixels of VGA). But… if I’m intending to work with a 1280x1024 resolution or higher, a VGA image will usually appear distorted or blocky. My Logitech webcam has a 1.3 megapixel sensor (1280x960 image size).
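Just to write João’s back-of-the-envelope check down as a couple of lines of Processing (variable names are mine; the numbers are the ones from the class):

// rough check of the projection-size vs. camera-resolution reasoning above
float projectionMM = 3000;  // width of the projected surface, in mm
float fingertipMM  = 5;     // smallest feature (fingertip) to resolve, in mm
int   vgaWidth     = 640;   // horizontal resolution of a VGA camera

float cells = projectionMM / fingertipMM;   // 3000 / 5 = 600 fingertip-sized cells
boolean vgaEnough = cells <= vgaWidth;      // 600 <= 640, so VGA still covers it
println(cells + " cells across the projection; VGA wide enough: " + vgaEnough);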
Monday, 2 November 2009
my master’s thesis //
Today I was reviewing some writings and decided to start reading my master’s thesis again... remembering the times when I was part of the Nomads crew… maybe a way to refresh in my mind the reason I'm doing my research in visual arts... maybe... maybe. Well, here it is - my master’s thesis PDF for free download.
Entre e através: complexidade e processos de design em arquitetura (Between and through: complexity and design processes in architecture)
Sunday, 1 November 2009
FLOB :) and...
Yupiiiiiiiiiiiiiii (!)
flob is a fast flood-fill multi-blob detector that works in Processing.
It's a JAR library that tracks blobs and simple features in Processing's image streams. It has been tested on Windows and Mac on several systems. It aims at fast code and includes many new algorithms: it can track blobs with IDs, track feature points...
(It's working uhuuuuuuuuuuuuuu!!!)
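To fix the idea of what a flood-fill blob detector actually does, here is a toy labeling sketch of my own (the general technique only, not flob’s actual API; "frame.png" is a placeholder file name): every connected group of white pixels in a thresholded image becomes one blob, reported with its bounding box.

// Toy flood-fill blob labeling on a binary image (not flob's API, just the idea).
PImage bin;     // binary image: white = foreground
int[] labels;   // 0 = not yet visited / background

void setup() {
  size(640, 480);
  bin = loadImage("frame.png");   // placeholder file name
  bin.resize(width, height);
  bin.filter(THRESHOLD, 0.5);     // binarize
  bin.loadPixels();
  labels = new int[width * height];
  int nextLabel = 1;
  for (int i = 0; i < labels.length; i++) {
    if (labels[i] == 0 && brightness(bin.pixels[i]) > 128) {
      floodFill(i, nextLabel++);
    }
  }
  println((nextLabel - 1) + " blobs found");
  image(bin, 0, 0);
  noLoop();
}

// iterative 4-neighbour flood fill that labels one blob and prints its bounding box
void floodFill(int start, int label) {
  ArrayList stack = new ArrayList();
  stack.add(new Integer(start));
  int minX = width, minY = height, maxX = 0, maxY = 0;
  while (stack.size() > 0) {
    int p = ((Integer) stack.remove(stack.size() - 1)).intValue();
    if (labels[p] != 0 || brightness(bin.pixels[p]) <= 128) continue;
    labels[p] = label;
    int x = p % width, y = p / width;
    minX = min(minX, x); maxX = max(maxX, x);
    minY = min(minY, y); maxY = max(maxY, y);
    if (x > 0)          stack.add(new Integer(p - 1));
    if (x < width - 1)  stack.add(new Integer(p + 1));
    if (y > 0)          stack.add(new Integer(p - width));
    if (y < height - 1) stack.add(new Integer(p + width));
  }
  println("blob " + label + ": (" + minX + "," + minY + ")-(" + maxX + "," + maxY + ")");
}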
I googled: "processing 1.0 tracker"
video tracker area
The KLT: an algorithm for computer vision that tracks points in a sequence of images
Display blob from MySQL
Fry: Visualizing Data
Smart Laser Scanner
Smart Laser Scanner for Human-Computer Interface
The problem of tracking hands and fingers in natural scenes has received much attention using passive acquisition vision systems and computationally intensive image processing. The Smart Laser Scanner is a simple active tracking system using a laser diode (visible or invisible light), steering mirrors, and a single non-imaging photodetector, which is capable of acquiring three-dimensional coordinates in real time without the need for any image processing at all. Essentially, it is a smart rangefinder scanner that, instead of continuously scanning over the full field of view, restricts its scanning area, on the basis of a real-time analysis of the backscattered signal, to a very narrow window precisely the size of the target.
gesture recognition
StrokeIt is an advanced mouse gesture recognition engine and command processor. What is a mouse gesture? Mouse gestures are simple symbols that you "draw" on your screen using your mouse. When you perform a mouse gesture that StrokeIt can recognize, it will perform the "action" associated with that gesture. In short, it's a nifty little program that lets you easily control programs by drawing symbols with your mouse.
An example: System B1 utilizes simple computer vision techniques to allow a user to control their system using hand and finger motion. Two-button mouse emulation is provided, along with a simple gesture-recognition system that gives users instant access to pre-defined functionality.
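To make the mouse-gesture idea concrete, here is a rough Processing sketch of one common approach (a toy version of my own, not how StrokeIt itself is implemented): record the drag, quantize each movement into up/down/left/right, collapse repeats, and look the resulting string up in a tiny gesture table.

// Minimal mouse-gesture sketch: each drag movement is quantized to U/D/L/R;
// on release the collapsed direction string is matched against a few gestures.
String path = "";
String lastDir = "";

void setup() {
  size(400, 400);
  background(255);
}

void draw() { }

void mouseDragged() {
  line(pmouseX, pmouseY, mouseX, mouseY);     // show the stroke
  float dx = mouseX - pmouseX, dy = mouseY - pmouseY;
  if (abs(dx) < 3 && abs(dy) < 3) return;     // ignore tiny jitters
  String dir = abs(dx) > abs(dy) ? (dx > 0 ? "R" : "L") : (dy > 0 ? "D" : "U");
  if (!dir.equals(lastDir)) {                 // collapse repeated directions
    path += dir;
    lastDir = dir;
  }
}

void mouseReleased() {
  // tiny gesture table; the "action" here is just a printed message
  if (path.equals("DR"))      println("gesture: L shape");
  else if (path.equals("RL")) println("gesture: back and forth");
  else if (path.equals("U"))  println("gesture: straight up");
  else                        println("unrecognized gesture: " + path);
  path = "";
  lastDir = "";
  background(255);
}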
Hand Tracker // Java
My friend João Gomes, the computer scientist who is my housemate here in Plymouth, sent me this simple program today for tracking the movement of someone's hand around an area:
http://icie.cs.byu.edu/cs656/Prog4.html
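I won’t reproduce the program here, but the general idea of tracking a moving hand with a webcam can be sketched in Processing with simple frame differencing (a rough version of my own, using the standard video capture class; newer Processing versions also need a cam.start() call): the centroid of the pixels that changed between frames gives a crude hand position.

// Rough motion tracker: difference the current webcam frame against the
// previous one and take the centroid of the changed pixels as the "hand".
import processing.video.*;

Capture cam;
int[] prev;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  prev = new int[width * height];
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  image(cam, 0, 0);

  long sumX = 0, sumY = 0;
  int count = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    float diff = abs(brightness(cam.pixels[i]) - brightness(prev[i]));
    if (diff > 40) {               // this pixel changed "enough"
      sumX += i % width;
      sumY += i / width;
      count++;
    }
    prev[i] = cam.pixels[i];       // remember the frame for next time
  }

  if (count > 50) {                // ignore frames with only noise
    float cx = sumX / (float) count;
    float cy = sumY / (float) count;
    noFill();
    stroke(255, 0, 0);
    ellipse(cx, cy, 30, 30);       // mark the estimated hand position
  }
}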
Digital Motion
PS3 Eye Webcam, Best Cam for Vision, Augmented Reality
It’s just US$40, and it’s your best ticket to creating your own computer vision and augmented reality projects, imagining stuff before the big game console makers do. It’s the Sony PlayStation 3 Eye. Why choose the PS3 Eye over another webcam? Because it was built for CV applications: the camera performs well in variable lighting, has rock-solid, low-latency USB performance, and is capable of high frame rates (60-75 fps at normal resolution, or even 125-150 fps if you can sacrifice resolution, which might be okay for tracking).
trick-out-your-ps3-eye-webcam
An update for Windows: updated-for-windows
There is a library for the Processing programming language that offers video playback, capture and recording capabilities: GSVideo
Processing.org:
tricks for enhancing tracking
motion tracking
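And one classic trick for this kind of fast camera, as a minimal sketch of my own (using the standard Processing capture class rather than GSVideo; newer Processing versions also need a cam.start() call): follow the brightest pixel in the frame, which works surprisingly well if an LED or a laser dot is in view.

// Brightest-pixel tracking: a cheap way to follow an LED or laser dot.
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  image(cam, 0, 0);

  // find the single brightest pixel in the frame
  int brightest = 0;
  float best = -1;
  for (int i = 0; i < cam.pixels.length; i++) {
    float b = brightness(cam.pixels[i]);
    if (b > best) {
      best = b;
      brightest = i;
    }
  }

  // mark it; with a bright LED/laser in view this point is very stable
  stroke(0, 255, 0);
  noFill();
  ellipse(brightest % width, brightest / width, 20, 20);
}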