There are two fundamental configurations of the robot end-effector (hand) and the camera:[4] eye-in-hand, in which the camera is mounted on the robot's end-effector, and eye-to-hand, in which the camera is fixed in the workspace and observes the robot. Visual servoing control techniques are broadly classified into the following types:[5][6] image-based (IBVS), position/pose-based (PBVS), and hybrid approaches. IBVS was proposed by Weiss and Sanderson.[7] The control law is based on the error between current and desired features on the image plane, and does not involve any estimate of the pose of the target.
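The classical IBVS law for point features can be sketched as follows. This is a minimal illustration, not the exact formulation of any one paper: it assumes normalized image-point features with approximately known depths, and the function names and gain are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized image point (x, y)
    at depth Z. Its rows map the camera velocity screw (vx, vy, vz, wx, wy, wz)
    to the feature velocity (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_control(features, desired, depths, gain=0.5):
    """Classic IBVS law v = -gain * L^+ (s - s*): only the image-plane error
    and (approximate) feature depths are used, never the target's pose."""
    error = (features - desired).ravel()  # stacked feature error s - s*
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error  # commanded 6-DOF velocity screw
```

When the features coincide with their desired positions the error, and hence the commanded velocity, is zero; note that the depths Z appear only inside the interaction matrix, which is why a rough depth estimate suffices in practice.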
In this case the image features are extracted as well, but they are additionally used to estimate 3D information (the pose of the object in Cartesian space); hence the servoing is done in 3D.
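A PBVS law drives the estimated Cartesian pose error to zero directly. The sketch below is a generic proportional law under the assumption that the pose error has already been estimated (the translation error as a 3-vector and the rotation error as a rotation matrix); function names and the gain are illustrative.

```python
import numpy as np

def pbvs_control(t_err, R_err, gain=0.5):
    """PBVS law: given the estimated Cartesian pose error (translation t_err,
    rotation matrix R_err), command a velocity screw that decays it to zero."""
    # Convert the rotation error to axis-angle form theta * u.
    theta = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        omega = np.zeros(3)
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(theta))
        omega = theta * axis
    # Proportional decay of both translation and rotation error.
    return np.concatenate([-gain * t_err, -gain * omega])
```

Unlike IBVS, the quality of this law depends entirely on the 3D pose estimate, which in turn requires a model of the target and a calibrated camera.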
Feddema et al.[13] introduced the idea of generating task trajectory with respect to the feature velocity.
The discussions concentrate on modeling of the interaction matrix, the camera, and visual features (points, lines, etc.).
It also mentions the importance of examining kinematic discrepancy, dynamic effects, repeatability, settling-time oscillations, and lag in response.
The author tries to address problems like lag and stability, while also talking about feed-forward paths in the control loop.
The paper also seeks justification for trajectory generation, the methodology of axis control, and the development of performance metrics.
The authors show that image points alone do not make good features due to the occurrence of singularities.
One main point that the author highlights is the relation between local minima and unrealizable image feature motions.
The only conditions are that the feature points being tracked never leave the field of view and that a depth estimate is predetermined by some off-line technique.
The secondary task is then to mark a fixation point and use it as a reference to bring the camera to the desired pose.
The paper discusses two examples for which depth estimates are obtained from robot odometry and by assuming that all features are on a plane.
The authors highlight the notion that ideal features should be chosen such that the DOF of motion can be decoupled by a linear relation.
The authors also introduce an estimate of the target velocity into the interaction matrix to improve tracking performance.
The effect of the choice of image features on the control law is discussed with respect to just the depth axis.
The authors provide a new, albeit complicated, formulation of the interaction matrix using the velocity of the moments in the image.
The relation between the moment derivatives ṁ_ij and the velocity screw v is given as ṁ_ij = L_{m_ij} v. This technique avoids camera calibration by assuming that the objects are planar and using a depth estimate.
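The moments themselves are simple sums over the observed region; the paper's contribution is the analytical interaction matrix L_{m_ij} linking each ṁ_ij to the camera velocity screw, which is not reproduced here. A minimal sketch of raw moments on a discrete point region (the function name and the example region are illustrative):

```python
import numpy as np

def raw_moment(points, i, j):
    """Raw image moment m_ij = sum over the region's points of x^i * y^j."""
    x, y = points[:, 0], points[:, 1]
    return float(np.sum(x**i * y**j))

# Low-order moments carry area-like and centroid information:
region = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
m00 = raw_moment(region, 0, 0)                     # zeroth moment ("area")
xg = raw_moment(region, 1, 0) / m00                # centroid x = m10 / m00
yg = raw_moment(region, 0, 1) / m00                # centroid y = m01 / m00
```

Features built from such moments (centroid, orientation, area) can be chosen so that each one responds mainly to a single DOF of camera motion, which is what makes moment-based features attractive for decoupled control.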
[29][31][32] The major difference is that the authors use a technique similar to,[16] where the task is broken into two (in the case where the features are not parallel to the camera plane).
Espiau in [35] showed from purely experimental work that image based visual servoing (IBVS) is robust to calibration errors.
The paper looks at the effect of errors and uncertainty on the terms in the interaction matrix from an experimental approach.
A similar study was done in [36] where the authors carry out experimental evaluation of a few uncalibrated visual servo systems that were popular in the 90’s.
The technique involves determining the error in extracting image position and propagating it to pose estimation and servoing control.
In,[38] the authors extend the work done in [39] by considering global stability in the presence of intrinsic and extrinsic calibration errors.
The main aim of the paper is to determine the upper bound on the positioning error due to image noise using a convex-optimization technique.
The authors conclude the paper with the observation that for unknown target geometry a more accurate depth estimate is required in order to limit the error.