This paper presents a spectral correlation-based method (SpectGRASP) for robotic grasping of arbitrarily shaped, unknown objects. Given a point cloud of an object, SpectGRASP extracts contact points on the object's surface matching the hand configuration. It requires neither offline training nor a-priori object models. We propose a novel Binary Extended Gaussian Image (BEGI), which represents the point cloud surface normals of both object and robot fingers as signals on a 2-sphere. Spherical harmonics are then used to estimate the correlation between finger and object BEGIs. The resulting spectral correlation density function provides a similarity measure of gripper and object surface normals. This is highly efficient in that it is simultaneously evaluated at all possible finger rotations in SO(3). A set of contact points is then extracted for each finger using rotations with high correlation values. We then use our previous work, the Local Contact Moment (LoCoMo) similarity metric, to sequentially rank the generated grasps so that the one with maximum likelihood is executed. We evaluate the performance of SpectGRASP by conducting experiments with a 7-axis robot fitted with a parallel-jaw gripper in a physics simulation environment. The obtained results indicate that the method can not only grasp individual objects, but also successfully clear randomly organized groups of objects. SpectGRASP also outperforms the closest state-of-the-art method in terms of grasp generation time and grasp efficiency.
arXiv:2107.12492v1
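As a rough illustration of the BEGI idea (not the authors' implementation), the sketch below bins unit surface normals into a binary spherical histogram and correlates an object histogram against a gripper histogram over discrete rotations about one axis. The paper evaluates the correlation over all of SO(3) at once via spherical harmonics; restricting to z-rotations here keeps the sketch short. All names, bin counts, and the toy data are illustrative assumptions.

```python
import numpy as np

def begi(normals, n_az=36, n_el=18):
    """Bin unit surface normals into a binary azimuth/elevation grid,
    a simplified stand-in for the Binary Extended Gaussian Image."""
    az = np.arctan2(normals[:, 1], normals[:, 0])       # azimuth in [-pi, pi]
    el = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))   # elevation in [0, pi]
    i = np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    j = np.clip((el / np.pi * n_el).astype(int), 0, n_el - 1)
    img = np.zeros((n_az, n_el), dtype=np.uint8)
    img[i, j] = 1
    return img

def z_rotation_correlation(obj_begi, hand_begi):
    """Correlate two BEGIs over all discrete rotations about z
    (circular shifts of the azimuth axis)."""
    n_az = obj_begi.shape[0]
    return np.array([np.sum(obj_begi * np.roll(hand_begi, k, axis=0))
                     for k in range(n_az)])

# Toy example: object normals clustered around +x, gripper normals around -x,
# so the correlation should peak near a 180-degree shift (18 of 36 bins).
rng = np.random.default_rng(0)
obj_n = rng.normal([1, 0, 0], 0.05, size=(200, 3))
obj_n /= np.linalg.norm(obj_n, axis=1, keepdims=True)
hand_n = rng.normal([-1, 0, 0], 0.05, size=(200, 3))
hand_n /= np.linalg.norm(hand_n, axis=1, keepdims=True)
scores = z_rotation_correlation(begi(obj_n), begi(hand_n))
best_shift = int(np.argmax(scores))
```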
In this paper, we present a novel concept and primary investigations regarding automated unfastening of hexagonal nuts by means of surface exploration with a compliant robot. In contrast to conventional industrial approaches that rely on custom-designed motorised tools and mechanical tool changers, we propose to use robot fingers to position, grasp and unfasten unknown, random-sized hexagonal nuts that are arbitrarily positioned in the robot's task space. Inspired by how visually impaired people handle unknown objects, in this work we use information observed from surface exploration to devise the unfastening strategy. It combines torque monitoring with active compliance for the robot fingers to smoothly explore the object's surface. We implement a shape estimation technique combining scaled iterative closest point and hypotrochoid approximation to estimate the location as well as the contour profile of the hexagonal nut, so as to accurately position the gripper fingers. We demonstrate this work in the context of dismantling an electrically driven vehicle battery pack. The experiments are conducted using a seven-degrees-of-freedom (DoF) compliant robot fitted with a two-finger gripper to unfasten four different-sized, randomly positioned hexagonal nuts. The obtained results suggest an overall exploration and unfastening success rate of 95% over an average of ten trials for each nut.
doi:10.3390/robotics10030107
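To make the shape-estimation step concrete, here is a deliberately simplified 2-D stand-in for the paper's scaled-ICP + hypotrochoid fit: given contact points sampled on the nut's contour and a known circumradius, it recovers the nut centre from the centroid and the in-plane rotation by brute-force search. The paper additionally estimates the nut size; this sketch assumes it is given, and all parameter values are illustrative.

```python
import numpy as np

def hexagon_radius(phi, theta, circumradius):
    """Boundary radius of a regular hexagon (rotation theta) at polar angle phi."""
    apothem = circumradius * np.cos(np.pi / 6)
    local = np.mod(phi - theta, np.pi / 3) - np.pi / 6
    return apothem / np.cos(local)

def fit_hexagon(points, circumradius, n_theta=360):
    """Estimate centre and in-plane rotation of a hexagonal nut from 2-D
    contact points: centre from the centroid, rotation by grid search over
    the hexagon's 60-degree symmetry interval."""
    center = points.mean(axis=0)
    d = points - center
    phi = np.arctan2(d[:, 1], d[:, 0])
    r = np.linalg.norm(d, axis=1)
    thetas = np.linspace(0, np.pi / 3, n_theta, endpoint=False)
    errors = [np.sum((r - hexagon_radius(phi, t, circumradius)) ** 2)
              for t in thetas]
    return center, thetas[int(np.argmin(errors))]

# Toy data: noiseless contour samples of a hexagon rotated by 10 degrees,
# centred at (3, -2), circumradius 9 mm.
true_theta = np.deg2rad(10.0)
phi_s = np.linspace(0, 2 * np.pi, 120, endpoint=False)
r_s = hexagon_radius(phi_s, true_theta, circumradius=9.0)
pts = np.stack([r_s * np.cos(phi_s), r_s * np.sin(phi_s)], axis=1) + np.array([3.0, -2.0])
center_est, theta_est = fit_hexagon(pts, circumradius=9.0)
```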
In [...], Marturi et al. presented an approach to dynamically replan the grasping of a moving object based on vision information from two depth cameras, one hand-mounted and the other a scene camera. ...
arXiv:2107.08149v1
The positioning error is computed and compensated by the visual servoing control strategy (Marturi, 2013). ... The drift observed at high magnifications has been corrected automatically using the method presented by Marturi et al. (2013b). ...
doi:10.1002/sca.21137 pmid:24578204
... tracking is not new; many researchers in the fields of computer vision and robotics have previously proved their robustness and adaptivity in solving challenging problems (Fukui et al. 2016; Marturi ...). Additionally, there are several possible directions for future research, including integrating a gaze controller (Marturi et al. 2015) with tracking for hand-eye coordination, improving the switching ... Naresh Marturi has been a KTP Robot Vision Scientist with KUKA Robotics UK Ltd. and the University of Birmingham since 2015. ...
doi:10.1007/s10514-018-9799-1
Robots Operating in Hazardous Environments
Effort 50.5 ± 24.43 vs. 71.5 ± 16.17; Frustration 6.9 ± 4.33 vs. 10.6 ± 4.32; Total workload 294 ± 29.08 vs. 359.5 ± 19.63; Influence of audio 57 ± 23.47; Influence of video 55 ± 20. ... Naresh Marturi, Alireza Rastegarpanah, Vijaykumar Rajasekaran, Valerio Ortenzi, Yasemin Bekiroglu, Jeffrey Kuo and Rustam Stolkin ...
doi:10.5772/intechopen.69739
This paper deals with the control of a redundant cobot arm to accomplish peg-in-hole insertion tasks in the context of middle-ear surgery. It mainly focuses on the development of two shared control laws that combine local measurements provided by position or force sensors with globally observed visual information. We first investigate two classical and well-established control modes, i.e., a position-based end-frame teleoperation controller and a comanipulation controller. Based on these two control architectures, we then propose a combination of visual feedback and position/force-based inputs in the same control scheme. In contrast to conventional control designs where all degrees of freedom (DoF) are equally controlled, the proposed shared controllers allow teleoperation of the linear/translational DoFs while the rotational ones are simultaneously handled by a vision-based controller. Such controllers reduce the task complexity; e.g., a complex peg-in-hole task is simplified for the operator to basic translations in space while tool orientations are automatically controlled. Various experiments are conducted using a 7-DoF robot arm equipped with a force/torque sensor and a camera, validating the proposed controllers in the context of simulating a minimally invasive surgical procedure. The obtained results are discussed in terms of accuracy, ergonomics and rapidity.
doi:10.3389/frobt.2022.824716 pmid:35391943 pmcid:PMC8980232
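The DoF split described in the abstract can be sketched as a single control cycle: the operator's translational velocity passes straight through, while a simple proportional law (standing in for the paper's vision-based orientation controller) drives the tool toward a visually estimated target orientation. The gain, state representation (roll-pitch-yaw), and numbers below are illustrative assumptions, not the published controller.

```python
import numpy as np

def shared_control_step(operator_v, current_rpy, target_rpy, k_rot=0.5):
    """One cycle of a shared controller: translations from the operator,
    rotations from a proportional vision-based law, returned as a 6-DoF twist."""
    v = np.asarray(operator_v, dtype=float)                    # operator input
    w = k_rot * (np.asarray(target_rpy) - np.asarray(current_rpy))
    return np.concatenate([v, w])

# Simulate a few cycles: the orientation error decays while the operator's
# translational command is passed through unchanged.
rpy = np.array([0.2, -0.1, 0.05])      # current tool orientation (rad)
target = np.zeros(3)                   # orientation estimated from vision
dt = 0.1
for _ in range(100):
    twist = shared_control_step([0.01, 0.0, 0.0], rpy, target)
    rpy = rpy + dt * twist[3:]         # integrate rotational velocity
final_error = np.linalg.norm(rpy - target)
```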
In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal focus (in-focus) position of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated using a tungsten gun SEM under various experimental conditions, such as varying raster scan speed and magnification, in real time. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
doi:10.1051/matecconf/20153205003
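The two ingredients named in the abstract, a gradient-based sharpness score and an iterative maximization of it, can be sketched as follows. This uses a generic golden-section search on a synthetic defocus model rather than the paper's closed-loop visual-servoing formulation; the scene, blur model, and focus range are invented for the example.

```python
import numpy as np

def sharpness(image):
    """Gradient-based sharpness score: mean squared gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def autofocus(acquire, z_min, z_max, n_iter=20):
    """Golden-section search for the focus position maximizing sharpness."""
    g = (np.sqrt(5) - 1) / 2
    a, b = z_min, z_max
    for _ in range(n_iter):
        c, d = b - g * (b - a), a + g * (b - a)
        if sharpness(acquire(c)) > sharpness(acquire(d)):
            b = d                      # maximum lies in [a, d]
        else:
            a = c                      # maximum lies in [c, b]
    return (a + b) / 2

# Synthetic "microscope": Gaussian blur grows with distance from the
# true in-focus position z* = 3.2.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
def acquire(z, z_star=3.2):
    sigma = 0.1 + abs(z - z_star)      # defocus blur width
    k = np.exp(-np.arange(-8, 9) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

z_hat = autofocus(acquire, 0.0, 6.0)
```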
Polarimetric synthetic aperture radar (PolSAR) image classification is a recent technology with great practical value in the field of remote sensing. However, because data collection is time-consuming and labor-intensive, few labeled datasets are available. Furthermore, most available state-of-the-art classification methods suffer heavily from speckle noise. To solve these problems, in this paper a novel semi-supervised algorithm based on self-training and superpixels is proposed. First, the Pauli-RGB image is over-segmented into superpixels to obtain a large number of homogeneous areas. Then, features that can mitigate the effects of speckle noise are obtained using spatial weighting within the same superpixel. Next, the training set is expanded iteratively using a semi-supervised unlabeled-sample selection strategy that elaborately makes use of the spatial relations provided by superpixels. In addition, a stacked sparse auto-encoder is self-trained using the expanded training set to obtain the classification results. Experiments on two typical PolSAR datasets verified its capability of suppressing speckle noise and showed excellent classification performance with limited labeled data.
doi:10.3390/rs11161933
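The self-training loop at the core of the method can be illustrated with a deliberately minimal classifier: each round, the most confident unlabeled samples are pseudo-labeled and added to the training set. A nearest-centroid classifier stands in for the paper's stacked sparse auto-encoder, the confidence measure and sample counts are invented, and the features would in practice be the superpixel-averaged (speckle-smoothed) ones.

```python
import numpy as np

def self_train(features, labels, n_rounds=5, add_per_round=20):
    """Self-training with a nearest-centroid classifier: labels < 0 mark
    unlabeled samples; each round pseudo-labels the most confident ones."""
    labeled = labels >= 0
    y = labels.copy()
    for _ in range(n_rounds):
        classes = np.unique(y[labeled])
        centroids = np.stack([features[labeled & (y == c)].mean(axis=0)
                              for c in classes])
        d = np.linalg.norm(features[:, None, :] - centroids[None], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        margin = np.sort(d, axis=1)[:, 1] - np.sort(d, axis=1)[:, 0]
        margin[labeled] = -np.inf                   # only pick unlabeled samples
        pick = np.argsort(margin)[-add_per_round:]  # largest margin = most confident
        y[pick], labeled[pick] = pred[pick], True
    return pred

# Two well-separated Gaussian classes with only 3 labels per class.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
true = np.repeat([0, 1], 100)
init = np.full(200, -1)
init[[0, 1, 2, 100, 101, 102]] = [0, 0, 0, 1, 1, 1]
pred = self_train(X, init)
accuracy = np.mean(pred == true)
```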
This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, where a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. Next, this concept was extended to perform dual-arm manipulation without human intervention. To this extent, an image-based visual servoing scheme has been developed to control the motion of the arms and position them at desired grasp locations. We then combine this with a dynamic position controller to move the grasped object with both arms along a prescribed trajectory. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in performing symmetric dual-arm manipulation tasks, and a 73% success rate in performing asymmetric dual-arm manipulation tasks.
doi:10.1109/arso.2017.8025192 dblp:conf/arso/RastegarpanahMS17
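The master-slave constraint described above, keeping a constant relative pose between the two end-effectors, reduces to composing the master's pose with a fixed offset transform. The sketch below shows this with homogeneous 4x4 matrices; the specific offset and master pose are illustrative, not taken from the experiments.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def follow(master_T, offset_T):
    """Slave target pose that keeps a constant relative transform to the
    master end-effector: T_slave = T_master @ T_offset."""
    return master_T @ offset_T

# Fixed offset: 0.3 m along the master's x-axis, same orientation.
offset = make_T(np.eye(3), [0.3, 0.0, 0.0])
# Master rotated 90 degrees about z and translated to (1, 2, 0.5).
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
master = make_T(Rz, [1.0, 2.0, 0.5])
slave = follow(master, offset)
# The relative pose is preserved regardless of the master's motion.
relative = np.linalg.inv(master) @ slave
```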
Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of complicated backgrounds, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in target sizes. To handle this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation of remote sensing images. It is an encoding-decoding structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds the channel attention mechanism to fuse semantic features. The high- and low-level semantic information are concatenated to generate global features via global average pooling. These global features are used as channel weights to acquire adaptive weight information for each channel through a fully connected layer. To accomplish an efficient fusion, these tuned weights are applied to the fused features. The performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-Net, UZ1, Light-Weight RefineNet, DeepLabv3+, and APPD. Experiments performed on the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other networks, with overall accuracy reaching 89.4% and 88.2%, and average F1 score reaching 90.4% and 86.7%, respectively.
doi:10.3390/rs12050872
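The channel-attention fusion step described above (concatenate, global average pooling, fully connected layer, re-weight) can be sketched in plain NumPy. The weight matrices here are random placeholders, not trained MANet parameters, and the squeeze-excite bottleneck shape is an assumption.

```python
import numpy as np

def channel_attention_fusion(low, high, w1, w2):
    """Concatenate low- and high-level feature maps, squeeze them with
    global average pooling, pass through a small FC bottleneck, and
    re-weight the fused channels with sigmoid attention weights."""
    fused = np.concatenate([low, high], axis=0)        # (C, H, W)
    squeeze = fused.mean(axis=(1, 2))                  # global average pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)             # FC + ReLU
    attn = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # FC + sigmoid, per channel
    return fused * attn[:, None, None]                 # channel-wise re-weighting

rng = np.random.default_rng(3)
low = rng.random((8, 16, 16))     # low-level features
high = rng.random((8, 16, 16))    # high-level features
C = 16                            # fused channel count
w1 = rng.normal(size=(4, C))      # bottleneck: C -> C/4
w2 = rng.normal(size=(C, 4))      # expand: C/4 -> C
out = channel_attention_fusion(low, high, w1, w2)
```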
Data-driven optimization is an efficient global optimization algorithm for expensive black-box functions. In this paper, we apply data-driven optimization to the task of change detection with synthetic aperture radar (SAR) images for the first time. We first propose an easy-to-implement threshold algorithm for change detection in SAR images based on data-driven optimization. Its performance has been compared with commonly used methods such as generalized Kittler and Illingworth threshold algorithms (GKIT). Next, we demonstrate how to tune the hyper-parameters of a (previously available) deep belief network (DBN) for change detection using data-driven optimization. Extensive evaluations are carried out using publicly available benchmark datasets. The obtained results suggest comparatively strong performance of our optimized DBN-based change detection algorithm. INDEX TERMS: Hyper-parameter optimization, data-driven optimization, change detection, deep belief network (DBN), synthetic aperture radar (SAR) image.
doi:10.1109/access.2019.2962622 (Volume 8, 2020)
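To illustrate the thresholding side of this, the sketch below evaluates the Kittler-Illingworth minimum-error criterion (the objective behind GKIT-style methods) over a grid of candidate thresholds on synthetic log-ratio values. Treating the criterion as a black box evaluated at sampled candidates is a deliberately simplified stand-in for the paper's surrogate-driven data-driven optimization, and the two-Gaussian toy data is invented.

```python
import numpy as np

def ki_criterion(values, t):
    """Kittler-Illingworth minimum-error criterion for threshold t."""
    g1, g2 = values[values <= t], values[values > t]
    if len(g1) < 2 or len(g2) < 2:
        return np.inf
    p1, p2 = len(g1) / len(values), len(g2) / len(values)
    s1, s2 = g1.std() + 1e-12, g2.std() + 1e-12
    return (1 + 2 * (p1 * np.log(s1) + p2 * np.log(s2))
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))

def threshold_by_search(values, n_candidates=200):
    """Pick the candidate threshold minimizing the criterion."""
    cands = np.linspace(values.min(), values.max(), n_candidates)
    return cands[int(np.argmin([ki_criterion(values, t) for t in cands]))]

# Synthetic log-ratio values: unchanged pixels near 0, changed pixels near 2.
rng = np.random.default_rng(4)
values = np.concatenate([rng.normal(0.0, 0.3, 5000), rng.normal(2.0, 0.3, 500)])
t_hat = threshold_by_search(values)
change_mask = values > t_hat
```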
Depth estimation for micro-nanomanipulation inside a scanning electron microscope (SEM) is always a major concern. So far in the literature, various methods have been proposed based on stereoscopic imaging; most of them require an external hardware unit or manual interaction during the process. In this paper, relying solely on image sharpness information, we present a new technique to estimate depth in real time. To improve both the accuracy and the rapidity of the method, we treat both autofocus and depth estimation as visual servoing paradigms. The major flexibility of the method lies in its ability to compute the focus position and the depth using only the acquired image information, i.e., sharpness. The feasibility of the method is shown by performing various ground-truth experiments: autofocus, depth estimation, focus-based nanomanipulator depth control, and sample topography estimation in different scenarios inside the vacuum chamber of a tungsten gun SEM. The obtained results demonstrate the accuracy, rapidity and efficiency of the developed method.
doi:10.1109/tim.2016.2556898
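The sharpness-only depth idea can be sketched as follows: estimate each surface point's in-focus position from a sampled sharpness-versus-focus curve (here with quadratic interpolation around the peak), then take the difference of two in-focus positions as relative depth. This is a simplified open-loop version of the paper's visual-servoing scheme, with an invented Gaussian sharpness profile.

```python
import numpy as np

def infocus_position(z_samples, scores):
    """Sub-sample in-focus estimate: quadratic interpolation of the
    sharpness curve around its sharpest sample (uniform z spacing)."""
    i = int(np.argmax(scores))
    i = min(max(i, 1), len(scores) - 2)           # keep a 3-point neighborhood
    s0, s1, s2 = scores[i - 1], scores[i], scores[i + 1]
    h = z_samples[1] - z_samples[0]
    denom = s0 - 2 * s1 + s2
    if denom == 0:
        return z_samples[i]
    return z_samples[i] + 0.5 * h * (s0 - s2) / denom   # parabola vertex

# Synthetic sharpness profiles for two surface points at depths 5.0 and 5.8;
# relative depth follows as the difference of in-focus positions.
z = np.linspace(0, 10, 41)
def profile(z_star):
    return np.exp(-(z - z_star) ** 2 / (2 * 1.5 ** 2))

za = infocus_position(z, profile(5.0))
zb = infocus_position(z, profile(5.8))
relative_depth = zb - za
```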
Naresh Marturi, Brahim Tamadazte, Sounkalo Dembélé, and Nadine Piat are with the Automatic Control and Micro-Mechatronic Systems (AS2M) department, FEMTO-ST Institute. ...
doi:10.1109/icra.2014.6906973 dblp:conf/icra/MarturiTDP14