Learning UI Navigation through Demonstrations composed of Macro Actions [article]

Wei Li
2021 arXiv pre-print
We have developed a framework to reliably build agents capable of UI navigation. The state space is simplified from raw pixels to a set of UI elements extracted by screen understanding, such as OCR and icon detection. The action space is restricted to the UI elements plus a few global actions. Actions can be customized per task, and each action is a sequence of basic operations conditioned on status checks. With this design, we are able to train DQfD and BC agents with a small number of demonstration episodes. We propose demo augmentation, which significantly reduces the required number of human demonstrations. We customized DQfD to allow demos collected on screenshots, which improves demo coverage of rare cases. Demos are collected only for the cases that failed during evaluation of the previous version of the agent. After tens of iterations looping over evaluation, demo collection, and training, the agent reaches a 98.7% success rate on the search task in an environment of 80+ apps and websites where initial states and viewing parameters are randomized.
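To illustrate the macro-action design described above, the sketch below shows one way an action could be composed as a sequence of basic operations, each gated by a status check on the current UI state. The names `BasicOp`, `MacroAction`, and the dict-based state are illustrative assumptions, not the paper's published code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, object]

@dataclass
class BasicOp:
    """A primitive operation, e.g. tap, swipe, or type-text (hypothetical)."""
    name: str
    run: Callable[[State], State]  # takes the UI state, returns the new state

@dataclass
class MacroAction:
    """A task-level action: a sequence of basic operations, where each
    step runs only if its status check passes on the current state."""
    name: str
    steps: List[Tuple[Callable[[State], bool], BasicOp]]

    def execute(self, state: State) -> State:
        for check, op in self.steps:
            if check(state):          # operation is conditioned on a status check
                state = op.run(state)
        return state

# Example: a "search" macro composed of focus + type + submit.
tap_box = BasicOp("tap_search_box", lambda s: {**s, "focused": True})
type_q  = BasicOp("type_query",     lambda s: {**s, "query": "weather"})
submit  = BasicOp("press_enter",    lambda s: {**s, "submitted": True})

search = MacroAction("search", [
    (lambda s: not s.get("focused"), tap_box),   # focus only if not already focused
    (lambda s: bool(s.get("focused")), type_q),
    (lambda s: bool(s.get("query")),   submit),
])

final = search.execute({"focused": False})
```

Gating each primitive on a status check lets the same macro succeed from varied initial states (e.g. the search box may or may not already be focused), which matches the paper's setting of randomized initial states.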
arXiv:2110.08653v1