Imagers based on focal plane arrays (FPAs) risk introducing in-band and out-of-band spurious response, or aliasing, due to undersampling. This can make high-level discrimination tasks such as recognition and identification much more difficult. To overcome this problem, three-chip color charge-coupled device (CCD) cameras typically offset one CCD by 1/2 pixel with respect to the other two. Analogously, monochrome imagers, including infrared imagers, can use microscan (or dither) to reduce aliasing. This...
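The abstract stops short of detail, but as a hedged, minimal NumPy sketch of the microscan idea (everything below is illustrative and not from the source): interleaving two fields of the same scene, sampled with a half-pixel offset, restores the finer sampling grid that a single undersampled field would alias.

    # Minimal sketch (not from the paper): 2x microscan on a 1-D signal.
    # A single field sampled at stride 2 can alias; interleaving a second
    # field offset by half the detector pitch restores the full rate.
    import numpy as np

    fine = np.cos(0.4 * np.arange(1024) ** 1.1)    # stand-in for a high-frequency scene
    field_a = fine[0::2]                           # undersampled field: aliasing possible
    field_b = fine[1::2]                           # same detector, dithered by 1/2 "pixel"

    microscanned = np.empty_like(fine)
    microscanned[0::2] = field_a                   # interleave the two fields
    microscanned[1::2] = field_b                   # effective sampling rate is doubled

    print(np.allclose(microscanned, fine))         # True: the fine sampling is recovered

In a real 2x2 microscan the detector is dithered along both axes, but the 1-D case already shows why the combined sample set supports twice the single-field Nyquist frequency.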
Synopsis: Homography transform in the Fourier spectrum with application to object recognition. Ideally, recognition of objects should be projection, scale, translation and rotation invariant, just as it is in human vision. This, however, is a very complex problem, since objects are often occluded and rarely appear the same twice, due to different camera/observer positions, variable lighting or object motion. Our goal in this regard is to investigate autonomous object recognition in unconstrained environments by means of the outlines of objects, which we will refer to as contours. One of the reasons for the popularity of contour-based analysis techniques is that edge detection constitutes an important aspect of shape recognition in the human visual system. The main motivation behind this work is that a 2-D homography may overcome the problem of noise sensitivity and boundary variations.
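As a hedged aside (not part of the synopsis), the following NumPy sketch shows the basic operation the work relies on: mapping a contour, stored as an (N, 2) array of points, through a 3x3 homography using homogeneous coordinates. The function name and the example matrix are invented for illustration.

    # Minimal sketch (not from the synopsis): applying a 2-D homography to a
    # contour represented as N (x, y) points, via homogeneous coordinates.
    import numpy as np

    def warp_contour(contour_xy, H):
        """Map an (N, 2) array of contour points through a 3x3 homography H."""
        n = contour_xy.shape[0]
        homogeneous = np.hstack([contour_xy, np.ones((n, 1))])   # (N, 3)
        mapped = homogeneous @ H.T                               # projective transform
        return mapped[:, :2] / mapped[:, 2:3]                    # back to inhomogeneous

    # Example: a unit-square contour under an arbitrary (made-up) homography.
    square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0,   0.1,  2.0],
                  [0.0,   1.2, -1.0],
                  [0.001, 0.0,  1.0]])
    print(warp_contour(square, H))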
Methods for super-resolution (SR) can be broadly classified into two families: (i) classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) example-based super-resolution (learning the correspondence between low- and high-resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super-resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale and across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution.
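Purely as a hedged illustration of the patch-recurrence observation, and not the authors' algorithm, the following NumPy sketch searches a crudely downscaled copy of an image for the nearest match to a patch taken from the full-resolution image; a small distance indicates that a low-/high-resolution example pair is available inside the single input image. The function and the toy input are invented, and the downscaling and brute-force search are deliberately naive.

    # Minimal sketch (not the paper's method): measuring cross-scale
    # patch recurrence inside a single image.
    import numpy as np

    def best_cross_scale_match(image, patch_size=5):
        h, w = image.shape
        coarse = image[::2, ::2]                     # crude 0.5x downscale stand-in
        ch, cw = coarse.shape
        # Collect every patch of the coarse image as a search "database".
        database = np.array([coarse[i:i + patch_size, j:j + patch_size].ravel()
                             for i in range(ch - patch_size + 1)
                             for j in range(cw - patch_size + 1)])
        # Query with one patch taken from the full-resolution image.
        query = image[h // 2:h // 2 + patch_size, w // 2:w // 2 + patch_size].ravel()
        distances = np.linalg.norm(database - query, axis=1)
        return distances.min()                       # small => the patch recurs across scales

    rng = np.random.default_rng(0)
    toy = rng.random((64, 64))
    print(best_cross_scale_match(toy))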
Image alignment is the process of matching one image, called the template (denote it T), with another image, I. There are many applications for image alignment, such as tracking objects in video, motion analysis, and many other computer vision tasks. In 1981, Bruce D. Lucas and Takeo Kanade proposed a new technique that used image intensity gradient information to search for the best match between a template T and another image I. The proposed algorithm has been widely used in the field of computer vision for the last 20 years and has seen many modifications and extensions. One such modification is the algorithm proposed by Simon Baker, Frank Dellaert, and Iain Matthews, which is considerably more computationally efficient than the original Lucas-Kanade algorithm.
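To make the idea concrete, here is a hedged, translation-only sketch in the spirit of the gradient-based alignment described above (the forward-additive formulation, not the faster inverse-compositional variant of Baker and colleagues); the function name and the synthetic example are invented, and NumPy and SciPy are assumed.

    # Minimal sketch: Gauss-Newton Lucas-Kanade updates estimating the
    # (row, col) shift that aligns image I to template T using image gradients.
    import numpy as np
    from scipy.ndimage import shift as warp_shift

    def lucas_kanade_translation(T, I, iterations=50):
        p = np.zeros(2)                                  # current (row, col) shift estimate
        for _ in range(iterations):
            I_w = warp_shift(I, -p, order=1)             # warp I by the current estimate
            error = T - I_w                              # residual against the template
            Gr, Gc = np.gradient(I_w)                    # row / column gradients
            # For pure translation the warp Jacobian is the identity, so the
            # steepest-descent images are just the image gradients.
            H = np.array([[np.sum(Gr * Gr), np.sum(Gr * Gc)],
                          [np.sum(Gr * Gc), np.sum(Gc * Gc)]])
            b = np.array([np.sum(Gr * error), np.sum(Gc * error)])
            dp = np.linalg.solve(H, b)                   # Gauss-Newton update
            p += dp
            if np.linalg.norm(dp) < 1e-4:
                break
        return p

    # Usage: recover a known (2, 1)-pixel shift of a smooth synthetic image.
    r, c = np.mgrid[0:64, 0:64]
    template = np.exp(-((r - 30.0) ** 2 + (c - 32.0) ** 2) / 50.0)
    image = warp_shift(template, (2.0, 1.0), order=1)
    print(lucas_kanade_translation(template, image))     # approx. [2. 1.]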
ITK is a powerful open-source toolkit implementing state-of-the-art algorithms in medical image processing and analysis. MATLAB, on the other hand, is well-known for its easy-to-use, powerful prototyping capabilities that significantly improve productivity. With the help of MATITK, biomedical image computing researchers familiar with MATLAB can harness the power of ITK algorithms while avoiding learning C++ and dealing with low-level programming issues.
This is a release of a Camera Calibration Toolbox for Matlab® with complete documentation. This document may also be used as a tutorial on camera calibration, since it includes general information about calibration, references, and related links.
Opticks is an expandable remote sensing and imagery analysis software platform that is free and open source. If you have used commercial tools such as ERDAS IMAGINE, RemoteView, ENVI, or SOCET GXP, give Opticks a try. Unlike competing tools, Opticks lets you add capability by creating an extension, and it provides the most advanced extension capability of any remote sensing tool on the market.
def fitToWindow(self):
    """Fits the image to the scroll area's size."""
    # Method of an image-viewer widget; assumes Qt (QtCore) and the
    # self.lblImage label are provided by the surrounding class.
    sizeImage = self.lblImage.pixmap().size()
    height, width = sizeImage.height(), sizeImage.width()
    # If the image is smaller than the widget, show it at its normal size.
    if height < self.size().height() and width < self.size().width():
        self.normalSize()
    else:
        # Scale the image to 98% of the widget's size, keeping the aspect ratio.
        sizeImage.scale(self.size() * 0.98, Qt.KeepAspectRatio)
        # Record the resulting scale factor.
        self.scale = float(sizeImage.height()) / height
        # Resize the image label to the scaled size.
        self.lblImage.resize(sizeImage)
    # Set windowFit so resize events keep the image fitted to the window.
    self.windowFit = True
The NASA Vision Workbench (VW) is a modular, extensible, cross-platform computer vision software framework written in C++. It was designed to support a variety of space exploration tasks, including automated science and engineering analysis, robot perception, and 2D/3D environment reconstruction, though it can also serve as a general-purpose image processing and machine vision framework in other contexts. The VW was developed within the Autonomous Systems and Robotics area of the Intelligent Systems Division at NASA's Ames Research Center.
The Large Synoptic Survey Telescope (LSST) is a project to build an 8.4 m telescope at Cerro Pachón, Chile, and survey the entire sky every three days, starting around 2014. The scientific goals of the project range from characterizing the population of largish asteroids in orbits that could hit the Earth to understanding the nature of the dark energy that is causing the Universe's expansion to accelerate. The application codes, which handle the images coming from the telescope and generate catalogs of astronomical sources, are being implemented in C++ and exported to Python using SWIG. The pipeline processing framework allows these Python modules to be connected together to process data in a parallel environment.
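The LSST middleware API is not reproduced here; purely as a hedged, generic sketch of the idea of chaining independently wrapped modules (for example, SWIG-exposed C++ code callable from Python) into a pipeline, with all stage names invented:

    # Generic sketch (invented names, not the LSST framework's API): chaining
    # processing stages, each a Python callable, into a sequential pipeline.
    def run_pipeline(stages, data):
        """Pass `data` through each stage in order and return the final result."""
        for stage in stages:
            data = stage(data)
        return data

    # Hypothetical stages standing in for wrapped C++ modules.
    def subtract_background(image):
        return {"image": image, "background_removed": True}

    def detect_sources(frame):
        frame["sources"] = ["src1", "src2"]        # placeholder detections
        return frame

    def build_catalog(frame):
        return {"catalog": frame["sources"]}

    print(run_pipeline([subtract_background, detect_sources, build_catalog], "raw_frame"))

The real framework additionally handles configuration and parallel execution, which this toy sequential loop does not attempt.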