29 Hits in 0.98 sec

SpectGRASP: Robotic Grasping by Spectral Correlation [article]

Maxime Adjigble, Cristiana de Farias, Rustam Stolkin, Naresh Marturi
2021 arXiv   pre-print
This paper presents a spectral correlation-based method (SpectGRASP) for robotic grasping of arbitrarily shaped, unknown objects. Given a point cloud of an object, SpectGRASP extracts contact points on the object's surface matching the hand configuration. It requires neither offline training nor a priori object models. We propose a novel Binary Extended Gaussian Image (BEGI), which represents the point cloud surface normals of both object and robot fingers as signals on a 2-sphere. Spherical harmonics are then used to estimate the correlation between finger and object BEGIs. The resulting spectral correlation density function provides a similarity measure of gripper and object surface normals. This is highly efficient in that it is simultaneously evaluated at all possible finger rotations in SO(3). A set of contact points is then extracted for each finger using rotations with high correlation values. We then use our previous work, the Local Contact Moment (LoCoMo) similarity metric, to sequentially rank the generated grasps such that the one with maximum likelihood is executed. We evaluate the performance of SpectGRASP by conducting experiments with a 7-axis robot fitted with a parallel-jaw gripper in a physics simulation environment. The obtained results indicate that the method can not only grasp individual objects but also successfully clear randomly organized groups of objects. SpectGRASP also outperforms the closest state-of-the-art method in terms of grasp generation time and grasp efficiency.
arXiv:2107.12492v1 fatcat:jkhjrdo3rfeo5gvaurcmd3y34i
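The BEGI construction described above amounts to binning surface normals as a binary signal on the 2-sphere. A minimal sketch of that step (the equiangular grid resolution is a hypothetical choice; the paper ultimately works with spherical-harmonic coefficients rather than an explicit grid):

```python
import numpy as np

def binary_egi(normals, n_theta=32, n_phi=64):
    """Bin unit surface normals into a binary spherical histogram.

    A simplified stand-in for the paper's Binary Extended Gaussian Image
    (BEGI): a sphere cell is 1 if any normal falls inside it, else 0.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))          # polar angle in [0, pi]
    phi = np.mod(np.arctan2(n[:, 1], n[:, 0]), 2 * np.pi)   # azimuth in [0, 2*pi)
    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    grid = np.zeros((n_theta, n_phi), dtype=np.uint8)
    grid[ti, pj] = 1
    return grid

# Example: normals of a flat patch all point up -> a single occupied cell.
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
print(binary_egi(normals).sum())  # 1
```

A parallel-jaw gripper's finger BEGI would be built the same way from the finger-pad normals, which is what makes the spherical correlation between the two signals meaningful.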

Nut Unfastening by Robotic Surface Exploration

Alireza Rastegarpanah, Rohit Ner, Rustam Stolkin, Naresh Marturi
2021 Robotics  
In this paper, we present a novel concept and primary investigations regarding automated unfastening of hexagonal nuts by means of surface exploration with a compliant robot. In contrast to conventional industrial approaches that rely on custom-designed motorised tools and mechanical tool changers, we propose using robot fingers to position, grasp and unfasten unknown, random-sized hexagonal nuts that are arbitrarily positioned in the robot's task space. Inspired by how visually impaired people handle unknown objects, in this work we use information observed from surface exploration to devise the unfastening strategy. It combines torque monitoring with active compliance for the robot fingers to smoothly explore the object's surface. We implement a shape estimation technique combining scaled iterative closest point and hypotrochoid approximation to estimate the location as well as the contour profile of the hexagonal nut, so as to accurately position the gripper fingers. We demonstrate this work in the context of dismantling an electrically driven vehicle battery pack. The experiments are conducted using a seven-degrees-of-freedom (DoF) compliant robot fitted with a two-finger gripper to unfasten four different-sized, randomly positioned hexagonal nuts. The obtained results suggest an overall exploration and unfastening success rate of 95% over an average of ten trials for each nut.
doi:10.3390/robotics10030107 fatcat:4sozi6tkcjbrxltrzkmos5prom
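The hypotrochoid approximation mentioned in the abstract can be illustrated with the standard parametrization; with R/r = 6 the curve is six-fold symmetric and, for d slightly below r, resembles a hexagonal nut contour (the R, r, d values below are illustrative, not the paper's fitted parameters):

```python
import numpy as np

def hypotrochoid(R, r, d, n=600):
    """Trace a hypotrochoid: a circle of radius r rolling inside a circle of
    radius R, with the traced point at distance d from the rolling centre.
    With R/r = 6 the curve has six lobes, approximating a hexagonal nut
    contour that can then be fitted to explored contact points."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = (R - r) * np.cos(t) + d * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - d * np.sin((R - r) / r * t)
    return np.stack([x, y], axis=1)

contour = hypotrochoid(R=6.0, r=1.0, d=0.8)  # six-lobed, hexagon-like contour
```

Fitting such a model (centre, orientation, scale) against contact points gathered by the fingers is what localizes the nut and its contour profile.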

Dual Quaternion-Based Visual Servoing for Grasping Moving Objects [article]

Cristiana de Farias, Maxime Adjigble, Brahim Tamadazte, Rustam Stolkin, Naresh Marturi
2021 arXiv   pre-print
In [7], Marturi et al. presented an approach to dynamically replan grasping of a moving object based on vision information from two depth cameras: one hand-mounted and the other a scene camera. ...
arXiv:2107.08149v1 fatcat:h2hrbau2tngd5lrpr4gagkl6k4

Scanning electron microscope image signal-to-noise ratio monitoring for micro-nanomanipulation

Naresh Marturi, Sounkalo Dembélé, Nadine Piat
2014 Scanning  
The positioning error is computed and compensated by the visual servoing control strategy (Marturi, 2013). ... The drift observed at high magnifications has been corrected automatically using the method presented by Marturi et al. (2013b). ...
doi:10.1002/sca.21137 pmid:24578204 fatcat:hi3jr5ekujaivefejbbu3c4kqu

Dynamic grasp and trajectory planning for moving objects

Naresh Marturi, Marek Kopicki, Alireza Rastegarpanah, Vijaykumar Rajasekaran, Maxime Adjigble, Rustam Stolkin, Aleš Leonardis, Yasemin Bekiroglu
2018 Autonomous Robots  
... tracking is not new; many researchers in the fields of computer vision and robotics have previously proved their robustness and adaptivity in solving challenging problems (Fukui et al. 2016; Marturi ...). Additionally, there are several possible directions for future research, which include integrating a gaze controller (Marturi et al. 2015) with tracking for hand-eye coordination, improving the switching ... Naresh Marturi is a KTP Robot Vision Scientist with KUKA Robotics UK Ltd. and the University of Birmingham, since 2015. ...
doi:10.1007/s10514-018-9799-1 fatcat:zhxisgmkknfpjbqeo2tfc7a5sy

Towards Advanced Robotic Manipulations for Nuclear Decommissioning [chapter]

Naresh Marturi, Alireza Rastegarpanah, Vijaykumar Rajasekaran, Valerio Ortenzi, Yasemin Bekiroglu, Jeffrey Kuo, Rustam Stolkin
2017 Robots Operating in Hazardous Environments  
Reported task-load ratings (mean ± SD, two conditions): Effort 50.5 ± 24.43 vs. 71.5 ± 16.17; Frustration 6.9 ± 4.33 vs. 10.6 ± 4.32; Total workload 294 ± 29.08 vs. 359.5 ± 19.63; Influence of audio 57 ± 23.47; Influence of video 55 ± 20. ...
doi:10.5772/intechopen.69739 fatcat:cfxpkjrarzeanbtmetpyp7aavu

Shared Control Schemes for Middle Ear Surgery

Jae-Hun So, Stéphane Sobucki, Jérôme Szewczyk, Naresh Marturi, Brahim Tamadazte
2022 Frontiers in Robotics and AI  
This paper deals with the control of a redundant cobot arm to accomplish peg-in-hole insertion tasks in the context of middle ear surgery. It mainly focuses on the development of two shared control laws that combine local measurements provided by position or force sensors with globally observed visual information. We first investigate two classical and well-established control modes, i.e., a position-based end-frame teleoperation controller and a comanipulation controller. Based on these two control architectures, we then propose a combination of visual feedback and position/force-based inputs in the same control scheme. In contrast to conventional control designs where all degrees of freedom (DoF) are equally controlled, the proposed shared controllers allow teleoperation of the linear/translational DoFs while the rotational ones are simultaneously handled by a vision-based controller. Such controllers reduce task complexity; e.g., a complex peg-in-hole task is simplified for the operator to basic translations in space while tool orientations are automatically controlled. Various experiments are conducted using a 7-DoF robot arm equipped with a force/torque sensor and a camera, validating the proposed controllers in the context of simulating a minimally invasive surgical procedure. The obtained results are discussed in terms of accuracy, ergonomics and rapidity.
doi:10.3389/frobt.2022.824716 pmid:35391943 pmcid:PMC8980232 fatcat:w6535bv4avg75h6hwpddleqg54
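The partitioned control idea, teleoperated translations plus vision-controlled rotations, can be sketched with a generic axis-angle orientation controller (a textbook rotational servoing law, not the paper's exact scheme; the gain and interfaces are assumptions):

```python
import numpy as np

def rotation_twist(R_cur, R_des, gain=0.5):
    """Angular velocity from the axis-angle error R_des * R_cur^T -- the kind
    of rotational control the shared scheme delegates to the vision loop."""
    R_err = R_des @ R_cur.T
    angle = np.arccos(np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = np.array([R_err[2, 1] - R_err[1, 2],
                     R_err[0, 2] - R_err[2, 0],
                     R_err[1, 0] - R_err[0, 1]]) / (2.0 * np.sin(angle))
    return gain * angle * axis

def shared_command(teleop_v, R_cur, R_des):
    """Partitioned 6-DoF twist: operator supplies the linear velocity,
    the vision-based controller supplies the angular velocity."""
    return np.concatenate([np.asarray(teleop_v, float),
                           rotation_twist(R_cur, R_des)])
```

The point of the partition is that the operator only ever commands the three translational components; the tool orientation converges automatically regardless of what the operator does.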

Closed-Loop Autofocus Scheme for Scanning Electron Microscope

Le Cui, Naresh Marturi, Eric Marchand, Sounkalo Dembélé, Nadine Piat, Y. Bellouard
2015 MATEC Web of Conferences  
In this paper, we present a full-scale autofocus approach for the scanning electron microscope (SEM). The optimal focus (in-focus) position of the microscope is achieved by maximizing the image sharpness using a vision-based closed-loop control scheme. An iterative optimization algorithm has been designed using a sharpness score derived from image gradient information. The proposed method has been implemented and validated in real time using a tungsten gun SEM under various experimental conditions, such as varying raster scan speed and magnification. We demonstrate that the proposed autofocus technique is accurate, robust and fast.
doi:10.1051/matecconf/20153205003 fatcat:frp272arbzaz5nuzfhafyytdoa
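The closed-loop scheme, maximizing a gradient-based sharpness score by iterative search over focus positions, can be sketched as follows (the `acquire` interface and the step-halving search are assumptions; the paper's optimizer and exact sharpness measure may differ):

```python
import numpy as np

def sharpness(img):
    """Gradient-based sharpness score: sum of squared pixel differences."""
    f = img.astype(float)
    return (np.diff(f, axis=0) ** 2).sum() + (np.diff(f, axis=1) ** 2).sum()

def autofocus(acquire, z0, step, min_step=1e-3):
    """Iterative hill-climb on the sharpness score. `acquire(z)` is a
    hypothetical interface returning an image at focus position z; whenever
    a step stops improving sharpness, the search reverses and halves it."""
    z = z0
    best = sharpness(acquire(z))
    while abs(step) > min_step:
        cand = sharpness(acquire(z + step))
        if cand > best:
            z, best = z + step, cand
        else:
            step = -step / 2.0
    return z

# Toy "microscope": a checkerboard whose contrast peaks at the true focus z = 2.
board = np.indices((16, 16)).sum(axis=0) % 2
acquire = lambda z: board * np.exp(-(z - 2.0) ** 2)
print(round(autofocus(acquire, z0=0.0, step=1.0), 3))  # 2.0
```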

Semi-Supervised PolSAR Image Classification Based on Self-Training and Superpixels

Yangyang Li, Ruoting Xing, Licheng Jiao, Yanqiao Chen, Yingte Chai, Naresh Marturi, Ronghua Shang
2019 Remote Sensing  
Polarimetric synthetic aperture radar (PolSAR) image classification is a recent technology with great practical value in the field of remote sensing. However, due to time-consuming and labor-intensive data collection, there are few labeled datasets available. Furthermore, most available state-of-the-art classification methods heavily suffer from speckle noise. To solve these problems, in this paper a novel semi-supervised algorithm based on self-training and superpixels is proposed. First, the Pauli-RGB image is over-segmented into superpixels to obtain a large number of homogeneous areas. Then, features that can mitigate the effects of the speckle noise are obtained using spatial weighting within the same superpixel. Next, the training set is expanded iteratively using a semi-supervised unlabeled-sample selection strategy that elaborately makes use of the spatial relations provided by superpixels. In addition, a stacked sparse auto-encoder is self-trained using the expanded training set to obtain classification results. Experiments on two typical PolSAR datasets verified its capability of suppressing the speckle noise and showed excellent classification performance with limited labeled data.
doi:10.3390/rs11161933 fatcat:hxen3wfavzh4xlayz3gursggyy
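The iterative training-set expansion can be sketched with a generic self-training loop (a toy nearest-centroid classifier stands in for the stacked sparse auto-encoder, and the paper's superpixel-based spatial selection is omitted):

```python
import numpy as np

def self_train(X_l, y_l, X_u, rounds=5, top_k=10):
    """Generic self-training: repeatedly classify unlabeled samples, move the
    most confident ones (largest distance margin) into the labeled set, and
    refit. Labels in y_l may be any integers; features are row vectors."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        classes = np.unique(y_l)
        centroids = np.stack([X_l[y_l == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X_u[:, None, :] - centroids[None], axis=2)
        pred = classes[d.argmin(axis=1)]
        ds = np.sort(d, axis=1)
        margin = ds[:, 1] - ds[:, 0]           # confidence: nearest-vs-runner-up gap
        keep = np.argsort(margin)[-top_k:]     # most confident unlabeled samples
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pred[keep]])
        X_u = np.delete(X_u, keep, axis=0)
    return X_l, y_l
```

In the paper, the confidence test is additionally gated by superpixel membership, which is what suppresses speckle-induced label noise during expansion.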

Vision-based framework to estimate robot configuration and kinematic constraints

Valerio Ortenzi, Naresh Marturi, Michael Mistry, Jeffrey A. Kuo, Rustam Stolkin
2018 IEEE/ASME transactions on mechatronics  
doi:10.1109/tmech.2018.2865758 fatcat:txgalrr4afhe3l2mwt2drbnibm

Autonomous vision-guided bi-manual grasping and manipulation

Alireza Rastegarpanah, Naresh Marturi, Rustam Stolkin
2017 2017 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)  
This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, where a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. Next, this concept was extended to perform dual-arm manipulation without human intervention. To this extent, an image-based visual servoing scheme has been developed to control the motion of the arms, positioning them at desired grasp locations. We then combine this with a dynamic position controller to move the grasped object along a prescribed trajectory using both arms. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in symmetric dual-arm manipulation tasks and a 73% success rate in asymmetric dual-arm manipulation tasks.
doi:10.1109/arso.2017.8025192 dblp:conf/arso/RastegarpanahMS17 fatcat:5lpqv24hpzd4dmbbygdgbh2xai
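The master-slave coordination, keeping a constant relative pose between end-effectors, reduces to composing the tracked master pose with a fixed relative transform (a minimal sketch with 4x4 homogeneous transforms; the numeric offset is illustrative):

```python
import numpy as np

def slave_target(T_master, T_rel):
    """Symmetric coordinated control sketch: the slave end-effector target is
    the master pose composed with a fixed relative transform, so the
    master-slave relative pose stays constant as the master moves."""
    return T_master @ T_rel

# 4x4 homogeneous transforms
T_rel = np.eye(4)
T_rel[0, 3] = 0.4                      # slave held 0.4 m along the master x-axis
T_master = np.eye(4)
T_master[:3, 3] = [0.1, 0.2, 0.3]      # current master end-effector pose
print(slave_target(T_master, T_rel)[:3, 3])  # [0.5 0.2 0.3]
```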

Multi-scale Adaptive Feature Fusion Network for Semantic Segmentation in Remote Sensing Images

Ronghua Shang, Jiyu Zhang, Licheng Jiao, Yangyang Li, Naresh Marturi, Rustam Stolkin
2020 Remote Sensing  
Semantic segmentation of high-resolution remote sensing images is highly challenging due to the presence of a complicated background, irregular target shapes, and similarities in the appearance of multiple target categories. Most existing segmentation methods that rely only on simple fusion of the extracted multi-scale features often fail to provide satisfactory results when there is a large difference in target sizes. Handling this problem through multi-scale context extraction and efficient fusion of multi-scale features, in this paper we present an end-to-end multi-scale adaptive feature fusion network (MANet) for semantic segmentation in remote sensing images. It is an encoding-decoding structure that includes a multi-scale context extraction module (MCM) and an adaptive fusion module (AFM). The MCM employs two layers of atrous convolutions with different dilation rates and global average pooling to extract context information at multiple scales in parallel. MANet embeds the channel attention mechanism to fuse semantic features. The high- and low-level semantic information are concatenated to generate global features via global average pooling. These global features are used as channel weights to acquire adaptive weight information for each channel through a fully connected layer. To accomplish an efficient fusion, these tuned weights are applied to the fused features. Performance of the proposed method has been evaluated by comparing it with six other state-of-the-art networks: fully convolutional networks (FCN), U-net, UZ1, Light-weight RefineNet, DeepLabv3+, and APPD. Experiments performed using the publicly available Potsdam and Vaihingen datasets show that the proposed MANet significantly outperforms the other existing networks, with overall accuracy reaching 89.4% and 88.2%, respectively, and with average F1 reaching 90.4% and 86.7%, respectively.
doi:10.3390/rs12050872 fatcat:wgtmjsyilncthpph5cm43tmi5y
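The channel-attention fusion described for the AFM can be sketched in a few lines (channel-first shapes, a sigmoid activation and a single fully connected layer with hypothetical learned parameters W, b are simplifying assumptions about the described pipeline):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(high, low, W, b):
    """Sketch of the adaptive fusion idea: concatenate high- and low-level
    feature maps, global-average-pool them into a channel descriptor, map it
    through a fully connected layer, and rescale the fused features with the
    resulting per-channel attention weights."""
    fused = np.concatenate([high, low], axis=0)  # (C, H, W), channel-first
    gap = fused.mean(axis=(1, 2))                # global average pooling -> (C,)
    weights = sigmoid(W @ gap + b)               # adaptive per-channel weights in (0, 1)
    return fused * weights[:, None, None]
```

The design choice is that the weights are computed from the *fused* global descriptor, so each channel's contribution adapts to the content of both feature levels rather than being fixed.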

Application of Data Driven Optimization for Change Detection in Synthetic Aperture Radar Images

Yangyang Li, Guangyuan Liu, Tiantian Li, Licheng Jiao, Gao Lu, Naresh Marturi
2019 IEEE Access  
Data-driven optimization is an efficient global optimization algorithm for expensive black-box functions. In this paper, we apply a data-driven optimization algorithm to the task of change detection with synthetic aperture radar (SAR) images for the first time. We first propose an easy-to-implement threshold algorithm for change detection in SAR images based on data-driven optimization. Its performance has been compared with commonly used methods such as generalized Kittler and Illingworth threshold algorithms (GKIT). Next, we demonstrate how to tune the hyper-parameters of a (previously available) deep belief network (DBN) for change detection using data-driven optimization. Extensive evaluations are carried out using publicly available benchmark datasets. The obtained results suggest comparatively strong performance of our optimized DBN-based change detection algorithm. INDEX TERMS: Hyper-parameter optimization, data-driven optimization, change detection, deep belief network (DBN), synthetic aperture radar (SAR) image.
doi:10.1109/access.2019.2962622 fatcat:z7zpwwzabbc5zjr3dohk2zylf4
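The threshold-selection step can be viewed as optimizing a black-box criterion over candidate thresholds. A minimal sketch, with an Otsu-style between-class-variance objective and plain grid search standing in for the paper's data-driven surrogate optimizer:

```python
import numpy as np

def objective(ratio_img, t):
    """A black-box threshold criterion: between-class variance of the
    thresholded change image (Otsu-style). Treated here as an expensive
    function to be maximized over the threshold t."""
    changed = ratio_img > t
    if changed.all() or (~changed).all():
        return 0.0
    w1 = changed.mean()
    return w1 * (1.0 - w1) * (ratio_img[changed].mean()
                              - ratio_img[~changed].mean()) ** 2

def best_threshold(ratio_img, candidates):
    """Exhaustive search; a data-driven optimizer would instead fit a
    surrogate model to a few evaluations of `objective`."""
    scores = [objective(ratio_img, t) for t in candidates]
    return float(candidates[int(np.argmax(scores))])
```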

Visual Servoing-Based Depth-Estimation Technique for Manipulation Inside SEM

Naresh Marturi, Brahim Tamadazte, Sounkalo Dembele, Nadine Piat
2016 IEEE Transactions on Instrumentation and Measurement  
Depth estimation for micro-nanomanipulation inside a scanning electron microscope (SEM) is always a major concern. So far in the literature, various methods have been proposed based on stereoscopic imaging. Most of them require an external hardware unit or manual interaction during the process. In this paper, relying solely on image sharpness information, we present a new technique to estimate the depth in real time. To improve both the accuracy and the rapidity of the method, we treat autofocus and depth estimation as visual servoing paradigms. The major flexibility of the method lies in its ability to compute the focus position and the depth using only the acquired image information, i.e., sharpness. The feasibility of the method is shown by performing various ground-truth experiments: autofocus, depth estimation, focus-based nanomanipulator depth control and sample topography estimation under different scenarios inside the vacuum chamber of a tungsten gun SEM. The obtained results demonstrate the accuracy, rapidity and efficiency of the developed method.
doi:10.1109/tim.2016.2556898 fatcat:epfry4hb2vb6tkcg4etys7tepq

Visual servoing schemes for automatic nanopositioning under scanning electron microscope

Naresh Marturi, Brahim Tamadazte, Sounkalo Dembele, Nadine Piat
2014 2014 IEEE International Conference on Robotics and Automation (ICRA)  
Naresh Marturi, Brahim Tamadazte, Sounkalo Dembélé, and Nadine Piat are with Automatic control and Micro Mechatronic Systems (AS2M) department, Institute FEMTO-ST,  ... 
doi:10.1109/icra.2014.6906973 dblp:conf/icra/MarturiTDP14 fatcat:tu5kb57a5vg3rmm4xktt6ejmj4
Showing results 1 — 15 out of 29 results