Intelligent Robot Lab
Brown University, Providence RI

Publications

Publications are listed in reverse chronological order (most recent first); you can filter them by keyword.

  
2025
  1. A. Ivanov, A. Bagaria, and G.D. Konidaris. Discovering Options that Minimize Average Planning Time. Accepted, Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence, February 2025.

2024
  1. C. Allen, A.T. Kirtland, R.Y. Tao, S. Lobel, D. Scott, N. Petrocelli, O. Gottesman, R. Parr, M. Littman, and G.D. Konidaris. Resolving Partial Observability in Decision Processes via the Lambda Discrepancy. In Advances in Neural Information Processing Systems 37, December 2024. [Video] [Page] [Code]

  2. Q. Zhao, H. Fu, C. Sun, and G.D. Konidaris. EPO: Hierarchical LLM Agents with Environment Preference Optimization. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, November 2024.

  3. X. Liu, A. Shah, G.D. Konidaris, S. Tellex, and D. Paulius. Lang2LTL-2: Grounding Spatio-temporal Navigation Commands Using Large Language and Vision Models. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2024.

  4. R. Rodriguez-Sanchez and G.D. Konidaris. Learning Abstract World Models for Value-Preserving Planning with Options. Reinforcement Learning Journal 1(1), August 2024.

  5. S. Lobel and R. Parr. An Optimal Tightness Bound for the Simulation Lemma. Reinforcement Learning Journal 1(1), August 2024.

  6. R. Zhang, H. Fu, Y. Miao, and G.D. Konidaris. Model-based Reinforcement Learning for Parameterized Action Spaces. In Proceedings of the Forty-first International Conference on Machine Learning, July 2024.

  7. H. Fu, P. Sharma, E. Stengel-Eskin, G.D. Konidaris, N. Le Roux, M-A Côté, and X. Yuan. Language-Guided Skill Learning with Temporal Variational Inference. In Proceedings of the Forty-first International Conference on Machine Learning, July 2024.

  8. M. Merlin, S. Parr, N. Parikh, S. Orozco, V. Gupta, E. Rosen, and G.D. Konidaris. Robot Task Planning Under Local Observability. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation, May 2024. [Video]

  9. B. Abbatematteo, E. Rosen, S. Thompson, M.T. Akbulut, S. Rammohan, and G.D. Konidaris. Composable Interaction Primitives: A Structured Policy Class for Efficiently Learning Sustained-Contact Manipulation Skills. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation, May 2024. [Video]

  10. X. Liu, A.J. Shah, E. Rosen, M. Jia, G.D. Konidaris, and S. Tellex. LTL-Transfer: Skill Transfer for Temporal Task Specification. Accepted, Proceedings of the 2024 IEEE International Conference on Robotics and Automation, May 2024. [Video]

  11. E. Rosen. Abstraction for Autonomous Human-Robot Interaction. PhD Thesis, Brown University, February 2024.

2023
  1. A. Bagaria, B.M. Abbatematteo, O. Gottesman, M. Corsaro, S. Rammohan, and G.D. Konidaris. Effectively Learning Initiation Sets in Hierarchical Reinforcement Learning. In Advances in Neural Information Processing Systems 36, December 2023.

  2. B. Quartey, A. Shah, and G.D. Konidaris. Exploiting Contextual Structure to Generate Useful Auxiliary Tasks. In the NeurIPS 2023 Workshop on Generalization in Planning, December 2023.

  3. R. Rodriguez-Sanchez and G.D. Konidaris. Learning Abstract World Models for Value-preserving Planning with Options. In the NeurIPS 2023 Workshop on Generalization in Planning, December 2023.

  4. O. Biza, S. Thompson, K.R. Pagidi, A. Kumar, E. van der Pol, R. Walters, T. Kipf, J-W. van de Meent, L.L.S. Wong, and R. Platt. One-shot Imitation Learning via Interaction Warping. In Proceedings of the 7th Annual Conference on Robot Learning, November 2023.

  5. E. Rosen, S. James, S. Orozco, V. Gupta, M. Merlin, S. Tellex, and G.D. Konidaris. Synthesizing Navigation Abstractions for Planning with Portable Manipulation Skills. In Proceedings of the 7th Annual Conference on Robot Learning, November 2023.

  6. B.M. Abbatematteo. Exploiting Structure for Efficient Robotic Manipulation. PhD Thesis, Brown University, October 2023.

  7. C.S. Allen. Structured Abstractions for General-Purpose Decision Making. PhD Thesis, Brown University, October 2023.

  8. R. Ma, L. Lam, B.A. Spiegel, A. Ganeshan, R. Patel, B. Abbatematteo, D.P. Paulius, S. Tellex, and G.D. Konidaris. Skill Generalization With Verbs. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2023.

  9. S. Shaw, D. Jha, A. Raghunathan, R.I. Corcodel, D. Romeres, G.D. Konidaris, and D. Nikovski. Constrained Dynamic Movement Primitives for Collision Avoidance in Novel Environments. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2023.

  10. I. Idrees, T. Yun, N. Sharma, N. Gopalan, S. Tellex, and G.D. Konidaris. Improved Inference of Human Intent by Combining Plan Recognition and Language Feedback. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2023.

  11. D. Paulius, A. Agostini, and D. Lee. Long-Horizon Planning and Execution With Functional Object-Oriented Networks. IEEE Robotics and Automation Letters 8(8), pages 4513-4520, August 2023.

  12. A. Bagaria, R. Jiang, R. Kumar, and T. Schaul. Scaling Goal-based Exploration via Pruning Proto-goals. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence, August 2023.

  13. M. Fishman, N. Kumar, C. Allen, N. Danas, M. Littman, S. Tellex, and G. Konidaris. Task Scoping: Generating Task-Specific Simplifications of Open-Scope Planning Problems. In the IJCAI Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning, August 2023.

  14. R. Rodriguez-Sanchez, B.A. Spiegel, J. Wang, R. Patel, G.D. Konidaris, and S. Tellex. RLang: A Declarative Language for Describing Partial World Knowledge to Reinforcement Learning Agents. In Proceedings of the Fortieth International Conference on Machine Learning, July 2023.

  15. S. Lobel, A. Bagaria, and G.D. Konidaris. Flipping Coins to Estimate Pseudocounts for Exploration in Reinforcement Learning. In Proceedings of the Fortieth International Conference on Machine Learning, July 2023.

  16. H. Fu, S. Yu, S. Tiwari, M.L. Littman, and G.D. Konidaris. Meta-Learning Parameterized Skills. In Proceedings of the Fortieth International Conference on Machine Learning, July 2023.

  17. B.A. Spiegel and G.D. Konidaris. Guided Policy Search for Parameterized Skills using Adverbs. In The ICML 2023 Workshop on Interactive Learning with Implicit Human Feedback, July 2023.

  18. B. Abbatematteo, C. Robbins, K. Sherry, J. Mokkamakkul, E. Rosen, S. Thompson, M. Stein, and G.D. Konidaris. Sensorized Manipulation Challenge Kit for Benchmarking Robotic Manipulation. In the RSS 2023 Workshop on Experiment-oriented Locomotion and Manipulation Research, July 2023. [instructions and code]

  19. M.S. Corsaro. Learning Task-Specific Grasps. PhD Thesis, Brown University, May 2023.

  20. H. Fu, J. Yao, O. Gottesman, F. Doshi-Velez, and G.D. Konidaris. Performance Bounds for Model and Policy Transfer in Hidden-parameter MDPs. In Proceedings of the Eleventh International Conference on Learning Representations, May 2023.

  21. O. Gottesman, K. Asadi, C. Allen, S. Lobel, G.D. Konidaris, and M.L. Littman. Coarse-Grained Smoothness for Reinforcement Learning in Metric Spaces. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, April 2023.

  22. A. Pacheck, S. James, G.D. Konidaris, and H. Kress-Gazit. Automatic Encoding and Repair of Reactive High-Level Tasks with Learned Abstract Representations. The International Journal of Robotics Research 42(4-5), pages 263-288, April 2023.

  23. M. M. Baker, A. New, M. Aguilar-Simon, Z. Al-Halah, S.M.R. Arnold, E. Ben-Iwhiwhu, A.P. Brna, E. Brooks, R.C. Brown, Z. Daniels, A. Daram, F. Delattre, R. Dellana, E. Eaton, H. Fu, K. Grauman, J. Hostetler, S. Iqbal, C. Kent, N. Ketz, S. Kolouri, G.D. Konidaris, D. Kudithipudi, E. Learned-Miller, S. Lee, M.L. Littman, S. Madireddy, J.A. Mendez, E.Q. Nguyen, C.D. Piatko, P.K. Pilly, A. Raghavan, A. Rahman, S.K. Ramakrishnan, N. Ratzlaff, A. Soltoggio, P. Stone, I. Sur, Z. Tang, S. Tiwari, K. Vedder, F. Wang, Z. Xu, A. Yanguas-Gil, H. Yedidsion, S. Yu, G.K. Vallabha. A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems. Neural Networks volume 160, pages 274-296, March 2023.

  24. S. Lobel, S. Rammohan, B. He, S. Yu, and G.D. Konidaris. Q-Functionals for Value-Based Continuous Control. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, February 2023.

2022
  1. E. Rosen, B. Abbatematteo, S. Thompson, T. Akbulut, and G.D. Konidaris. On the Role of Structure in Manipulation Skill Learning. In the CoRL 2022 Workshop on Learning, Perception, and Abstraction for Long-Horizon Planning, December 2022.

  2. H. Fu, S. Yu, M.L. Littman, and G.D. Konidaris. Model-based Lifelong Reinforcement Learning with Bayesian Exploration. In Advances in Neural Information Processing Systems 35, December 2022.

  3. S. Tiwari and G.D. Konidaris. Effects of Data Geometry in Early Deep Learning. In Advances in Neural Information Processing Systems 35, December 2022.

  4. C. Lovering, J.Z. Forde, G.D. Konidaris, E. Pavlick, and M.L. Littman. Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex. In Advances in Neural Information Processing Systems 35, December 2022.

  5. C.B. Ames. Towards Efficient and Robust Robot Planning. PhD Thesis, Duke University, October 2022.

  6. B. Ames, J. Morgan, and G.D. Konidaris. IKFlow: Generating Diverse Inverse Kinematics Solutions. IEEE Robotics and Automation Letters 7(3), pages 7177-7184, July 2022.

  7. H. Abdul-Rashid, M. Freeman, B. Abbatematteo, G.D. Konidaris, and D. Ritchie. Learning to Infer Kinematic Hierarchies for Novel Object Instances. In Proceedings of the 2022 International Conference on Robotics and Automation, pages 8461-8467, May 2022.

  8. S. Shaw, B. Abbatematteo, and G.D. Konidaris. RMPs for Safe Impedance Control in Contact-Rich Manipulation. In Proceedings of the 2022 International Conference on Robotics and Automation, pages 2707-2713, May 2022.

  9. E. Hsiung, H. Mehta, J. Chu, X. Liu, R. Patel, S. Tellex, and G.D. Konidaris. Generalizing to New Domains by Mapping Natural Language to Lifted LTL. In Proceedings of the 2022 International Conference on Robotics and Automation, pages 3624-3630, May 2022.

  10. M. Berg, S. Tellex, and G.D. Konidaris. Using Language to Generate State Abstractions for Long-Range Planning in Outdoor Environments. In Proceedings of the 2022 International Conference on Robotics and Automation, pages 1888-1895, May 2022.

  11. K. Zheng, R. Chitnis, Y. Sung, G.D. Konidaris, and S. Tellex. Towards Optimal Correlational Object Search. In Proceedings of the 2022 International Conference on Robotics and Automation, pages 7313-7319, May 2022.

  12. S. James, B. Rosman, and G.D. Konidaris. Autonomous Learning of Object-Centric Abstractions for High-Level Planning. In Proceedings of the Tenth International Conference on Learning Representations, April 2022.

  13. S. Lobel, A. Bagaria, C. Allen, O. Gottesman, and G.D. Konidaris. Optimistic Initialization for Exploration in Continuous Control. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, February 2022.

  14. J.Z. Forde, C. Lovering, G.D. Konidaris, E. Pavlick, and M.L. Littman. Where, When & Which Concepts Does AlphaZero Learn? Lessons from the Game of Hex. In Proceedings of the AAAI-22 Workshop on Reinforcement Learning in Games, February 2022.

2021
  1. C. Allen, N. Parikh, O. Gottesman, and G.D. Konidaris. Learning Markov State Abstractions for Deep Reinforcement Learning. Accepted, Advances in Neural Information Processing Systems 34, December 2021.

  2. H. Fu, S. Yu, M.L. Littman, and G.D. Konidaris. Bayesian Exploration for Lifelong Reinforcement Learning. In the NeurIPS 2021 Deep Reinforcement Learning Workshop, December 2021.

  3. S. James. Learning Portable Symbolic Representations. PhD Thesis, University of the Witwatersrand, October 2021.

  4. B. Abbatematteo, E. Rosen, S. Tellex, and G.D. Konidaris. Bootstrapping Motor Skill Learning with Motion Planning. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2021. [Video]

  5. M. Corsaro, S. Tellex, and G.D. Konidaris. Learning to Detect Multi-Modal Grasps for Dexterous Grasping in Dense Clutter. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2021.

  6. K. Zheng, Y. Sung, G.D. Konidaris, and S. Tellex. Multi-Resolution POMDP Planning for Multi-Object Search in 3D. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2021.

  7. A. Bagaria, J. Senthil, M. Slivinski, and G.D. Konidaris. Robustly Learning Composable Options in Deep Reinforcement Learning. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, August 2021.

  8. C. Allen, M. Katz, T. Klinger, G.D. Konidaris, M. Riemer, and G. Tesauro. Efficient Black-Box Planning Using Macro-Actions with Focused Effects. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, August 2021.

  9. C. Allen, M. Katz, T. Klinger, G.D. Konidaris, M. Riemer, and G. Tesauro. Efficient Black-Box Planning Using Macro-Actions with Focused Effects. In the ICAPS 2021 Workshop on Heuristics and Search for Domain-independent Planning, August 2021.

  10. A. Bagaria, J. Senthil, and G.D. Konidaris. Skill Discovery for Exploration and Planning using Deep Skill Graphs. In Proceedings of the Thirty-Eighth International Conference on Machine Learning, July 2021.

  11. B. Tang, M. Corsaro, G.D. Konidaris, S. Nikolaidis, and S. Tellex. Learning Collaborative Pushing and Grasping Policies in Dense Clutter. In Proceedings of the 2021 International Conference on Robotics and Automation, May 2021.

  12. K. Asadi, N. Parikh, R. Parr, G.D. Konidaris, and M.L. Littman. Deep Radial-Basis Value Functions for Continuous Control. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, February 2021.

  13. J. Roy and G.D. Konidaris. Visual Transfer For Reinforcement Learning Via Wasserstein Domain Confusion. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, February 2021.

  14. O. Kroemer, S. Niekum, and G.D. Konidaris. A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. Journal of Machine Learning Research 22(30), pages 1-82, January 2021.

2020
  1. C. Allen, N. Parikh, and G.D. Konidaris. Learning Markov State Abstractions for Deep Reinforcement Learning. In The NeurIPS 2020 Deep Reinforcement Learning Workshop, December 2020.

  2. N. Parikh, Z. Horvitz, N. Srinivasan, A. Shah, and G.D. Konidaris. Graph Embedding Priors for Multi-task Deep Reinforcement Learning. In The 4th Knowledge Representation and Reasoning Meets Machine Learning Workshop at NeurIPS 2020, December 2020.

  3. E. Rosen, N. Kumar, N. Gopalan, D. Ullman, G.D. Konidaris, and S. Tellex. Building Plannable Representations with Mixed Reality. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2020. [Video]

  4. S. Murray, G.D. Konidaris, and D.J. Sorin. Roadmap Subsampling for Changing Environments. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2020.

  5. B. Keller, M. Draelos, K. Zhou, R. Qian, A.N. Kuo, G.D. Konidaris, K. Hauser, and J.A. Izatt. Optical Coherence Tomography-Guided Robotic Ophthalmic Microsurgery via Reinforcement Learning from Demonstration. IEEE Transactions on Robotics 36(4), pages 1207-1218, August 2020.

  6. S. James, B. Rosman, and G.D. Konidaris. Learning Portable Representations for High-Level Planning. In Proceedings of the Thirty-Seventh International Conference on Machine Learning, July 2020.

  7. N. Gopalan, E. Rosen, G.D. Konidaris, and S. Tellex. Simultaneously Learning Transferable Symbols and Language Groundings from Perceptual Data for Instruction Following. In Robotics: Science and Systems XVI, July 2020. [Video]

  8. A. Bagaria, J. Crowley, J.W.N. Lim, and G.D. Konidaris. Skill Discovery for Exploration and Planning using Deep Skill Graphs. In The 4th Lifelong Learning Workshop at ICML 2020, July 2020.

  9. R. Rodriguez-Sanchez, R. Patel, and G.D. Konidaris. On the Relationship Between Structure in Natural Language and Models of Sequential Decision Processes. In The First Workshop on Language in Reinforcement Learning at ICML 2020, July 2020.

  10. M. Merlin, N. Parikh, E. Rosen, and G.D. Konidaris. Locally Observable Markov Decision Processes. In the ICRA 2020 Workshop on Perception, Action, Learning: From Metric-Semantic Scene Understanding to High-level Task Execution, June 2020.

  11. Y. Jinnai, J. Park, M.C. Machado, and G.D. Konidaris. Exploration in Reinforcement Learning with Deep Covering Options. In Proceedings of the Eighth International Conference on Learning Representations, April 2020.

  12. A. Bagaria and G.D. Konidaris. Option Discovery using Deep Skill Chaining. In Proceedings of the Eighth International Conference on Learning Representations, April 2020.

  13. S. Lobel, C. Li, J. Gao, and L. Carin. Toward Amortized Ranking-Critical Training For Collaborative Filtering. In Proceedings of the Eighth International Conference on Learning Representations, April 2020.

2019
  1. A. Bagaria and G.D. Konidaris. Option Discovery using Deep Skill Chaining. In the NeurIPS 2019 Workshop on Deep Reinforcement Learning, December 2019.

  2. S. Kim and G.D. Konidaris. Adaptive Temperature Tuning for Mellowmax in Deep Reinforcement Learning. In the NeurIPS 2019 Workshop on Deep Reinforcement Learning, December 2019.

  3. B. Abbatematteo, S. Tellex, and G.D. Konidaris. Learning to Generalize Kinematic Models to Novel Objects. In Conference on Robot Learning, November 2019. [Video]

  4. V. Cohen, B. Burchfiel, T. Nguyen, N. Gopalan, G.D. Konidaris, and S. Tellex. Grounding Language Attributes to Objects Using Bayesian Eigenobjects. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1187-1194, November 2019.

  5. B. Ames and G.D. Konidaris. Bounded Error LQR-Trees. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 144-150, November 2019.

  6. G.D. Konidaris. On The Necessity of Abstraction. Current Opinion in Behavioral Sciences 29 (Special Issue on Artificial Intelligence), pages 1-7, October 2019.

  7. A. Pacheck, G.D. Konidaris, and H. Kress-Gazit. Automatic Encoding and Repair of Reactive High-Level Tasks with Learned Abstract Representations. In Robotics Research: the 18th Annual Symposium, October 2019.

  8. E. Rosen, D. Whitney, E. Phillips, G. Chien, J. Tompkin, G.D. Konidaris, and S. Tellex. Communicating And Controlling Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays. The International Journal of Robotics Research 38(12-13), October 2019.

  9. S. Kim, K. Asadi, M.L. Littman, and G.D. Konidaris. DeepMellow: Removing the Need for a Target Network in Deep Q-Learning. In Proceedings of the Twenty Eighth International Joint Conference on Artificial Intelligence, pages 2733-2739, August 2019.

  10. B.C.M. Burchfiel. 3D Object Representations for Robot Perception. PhD Thesis, Duke University, July 2019.

  11. S. Murray, W. Floyd-Jones, G.D. Konidaris, and D.J. Sorin. A Programmable Architecture for Robot Motion Planning Acceleration. In Proceedings of the 30th IEEE International Conference on Application-specific Systems, Architectures and Processors, July 2019.

  12. Y. Jinnai, J. Park, D. Abel, and G.D. Konidaris. Discovering Options for Exploration by Minimizing Cover Time. In Proceedings of the 36th International Conference on Machine Learning, pages 3130-3139, June 2019.

  13. Y. Jinnai, D. Abel, D. Hershkowitz, M.L. Littman, and G.D. Konidaris. Finding Options that Minimize Planning Time. In Proceedings of the 36th International Conference on Machine Learning, pages 3120-3129, June 2019.

  14. N. DeMarinis, S. Tellex, V. Kemerlis, G.D. Konidaris, and R. Fonseca. Scanning the Internet for ROS: A View of Security in Robotics Research. In Proceedings of the 2019 International Conference on Robotics and Automation, pages 8514-8521, May 2019.

  15. S.Y. Gadre, E. Rosen, G. Chien, E. Phillips, S. Tellex, and G.D. Konidaris. End-User Robot Programming Using Mixed Reality. In Proceedings of the 2019 International Conference on Robotics and Automation, pages 2707-2713, May 2019.

  16. S. Kim, K. Asadi, M.L. Littman, and G.D. Konidaris. Removing the Target Network from Deep Q-Networks with the Mellowmax Operator (Extended Abstract). In Proceedings of the 2019 International Conference on Autonomous Agents and Multiagent Systems, May 2019.

  17. A. Levy, G.D. Konidaris, R. Platt, and K. Saenko. Learning Multi-Level Hierarchies with Hindsight. In Proceedings of the Seventh International Conference on Learning Representations, May 2019.

  18. Y. Jinnai, D. Abel, J.W. Park, D.E. Hershkowitz, M.L. Littman, and G.D. Konidaris. Skill Discovery with Well-Defined Objectives. In Proceedings of the ICLR Workshop on Structure and Priors in Reinforcement Learning, May 2019.

  19. C.J. Bester, S.D. James, and G.D. Konidaris. Multi-Pass Q-Networks for Deep Reinforcement Learning with Parameterised Action Spaces. arXiv:1905.04388, May 2019.

  20. C. Amato, G.D. Konidaris, L.P. Kaelbling, and J.P. How. Modeling and Planning with Macro-Actions in Decentralized POMDPs. Journal of Artificial Intelligence Research 64, pages 817-859, March 2019.

2018
  1. B. Burchfiel and G.D. Konidaris. Hybrid Bayesian Eigenobjects: Combining Linear Subspace and Deep Network Methods for 3D Robot Vision. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 6843-6850, October 2018.

  2. B. Ames, A. Thackston, and G.D. Konidaris. Learning Symbolic Representations for Planning with Parameterized Skills. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 526-533, October 2018.

  3. Y. Zhou, B. Burchfiel, and G.D. Konidaris. Representing, Learning, and Controlling Complex Object Interactions. Autonomous Robots 42(7), pages 1355-1367, October 2018. [Video]

  4. J. Yao, T. Killian, G.D. Konidaris, and F. Doshi-Velez. Direct Policy Transfer via Hidden Parameter Markov Decision Processes. In The 2nd Lifelong Learning: A Reinforcement Learning Approach (LLARLA) Workshop, July 2018.

  5. D. Abel, Y. Jinnai, Y. Guo, G.D. Konidaris, and M.L. Littman. Policy and Value Transfer for Lifelong Reinforcement Learning. In Proceedings of the Thirty-fifth International Conference on Machine Learning, pages 20-29, July 2018.

  6. S. James, B. Rosman, and G.D. Konidaris. Learning to Plan with Portable Symbols. In the ICML/IJCAI/AAMAS 2018 Workshop on Planning and Learning, July 2018.

  7. E.L. Nelson, N.E. Berthier, and G.D. Konidaris. Handedness and Reach-to-Place Kinematics in Adults: Left-Handers Are Not Reversed Right-Handers. Journal of Motor Behavior 50(4), pages 381-391, July 2018.

  8. G.D. Konidaris, L.P. Kaelbling, and T. Lozano-Perez. From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning. Journal of Artificial Intelligence Research 61, pages 215-289, January 2018. [Video]

2017
  1. G. Andersen and G.D. Konidaris. Active Exploration for Learning Symbolic Representations. In Advances in Neural Information Processing Systems 30, pages 5009-5019, December 2017.

  2. T. Killian, S. Daulton, G.D. Konidaris, and F. Doshi-Velez. Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes. In Advances in Neural Information Processing Systems 30, pages 6250-6261, December 2017.

  3. D. Whitney, E. Rosen, E. Phillips, G.D. Konidaris, and S. Tellex. Comparing Robot Grasping Teleoperation Across Desktop and Virtual Reality with ROS Reality. In Robotics Research: the 17th Annual Symposium, December 2017.

  4. E. Rosen, D. Whitney, E. Phillips, G. Chien, J. Tompkin, G.D. Konidaris, and S. Tellex. Communicating Robot Arm Motion Intent Through Mixed Reality Head-mounted Displays. In Robotics Research: the 17th Annual Symposium, December 2017.

  5. C. Allen, K. Asadi, M. Roderick, A-R. Mohamed, G.D. Konidaris, and M.L. Littman. Mean Actor Critic. arXiv:1709.00503 [stat.ML], September 2017.

  6. B. Burchfiel and G.D. Konidaris. Bayesian Eigenobjects: A Unified Framework for 3D Robot Perception. In Robotics: Science and Systems XIII, July 2017.

  7. S. James, G.D. Konidaris, and B. Rosman. An Analysis of Monte Carlo Tree Search. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3576-3582, February 2017.

2016
  1. C. Amato, G.D. Konidaris, A. Anders, G. Cruz, J.P. How, and L.P. Kaelbling. Policy Search for Multi-Robot Coordination Under Uncertainty. The International Journal of Robotics Research 35(14), pages 1760-1778, December 2016.

  2. T. Killian, G.D. Konidaris, and F. Doshi-Velez. Transfer Learning Across Patient Variations with Hidden Parameter Markov Decision Processes. In the NIPS 2016 Workshop on Machine Learning for Health, December 2016.

  3. S. Murray, W. Floyd-Jones, Y. Qi, G.D. Konidaris and D. Sorin. The Microarchitecture of a Real-Time Robot Motion Planning Accelerator. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, October 2016.

  4. G.D. Konidaris. Constructing Abstraction Hierarchies Using a Skill-Symbol Loop. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, July 2016.

  5. F. Doshi-Velez and G.D. Konidaris. Hidden Parameter Markov Decision Processes: A Semiparametric Regression Approach for Discovering Latent Task Parametrizations. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, July 2016.

  6. S. James, B. Rosman and G.D. Konidaris. An Investigation into the Effectiveness of Heavy Rollouts in UCT. In the IJCAI 2016 Workshop on General Intelligence in Game-Playing Agents, July 2016.

  7. S. Murray, W. Floyd-Jones, Y. Qi, D. Sorin and G.D. Konidaris. Robot Motion Planning on a Chip. In Robotics: Science and Systems XII, June 2016.

  8. Y. Zhou and G.D. Konidaris. Representing and Learning Complex Object Interactions. In Robotics: Science and Systems XII, June 2016. [Video]

  9. B. Burchfiel and G.D. Konidaris. Generalized 3D Object Representation using Bayesian Eigenobjects. In the RSS 2016 Workshop on Geometry and Beyond: Representations, Physics, and Scene Understanding for Robotics, June 2016.

  10. W. Masson, P. Ranchod, and G.D. Konidaris. Reinforcement Learning with Parameterized Actions. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 2016.

2015
  1. G.D. Konidaris. What Are Representations For? Invited contribution in IEEE CIS Newsletter on Cognitive and Developmental Systems, page 7, December 2015.

  2. P. S. Thomas, S. Niekum, G. Theocharous, and G. D. Konidaris. Policy Evaluation using the Ω-Return. In Advances in Neural Information Processing Systems 28, pages 334-342, December 2015.

  3. D.H. Zewdie and G.D. Konidaris. Representation Discovery for Kernel-Based Reinforcement Learning. Technical Report MIT-CSAIL-TR-2015-032, MIT Computer Science and Artificial Intelligence Laboratory, November 2015. [Code]

  4. P. Beeson and B. Ames. TRAC-IK: An Open-Source Library for Improved Solving of Generic Inverse Kinematics. In Proceedings of the IEEE/RAS International Conference on Humanoid Robots, pages 928-935, November 2015.

  5. G.D. Konidaris. Constructing Abstraction Hierarchies Using a Skill-Symbol Loop. arXiv:1509.07582, September 2015.

  6. P. Ranchod, B. Rosman, G.D. Konidaris. Nonparametric Bayesian Reward Segmentation for Skill Discovery Using Inverse Reinforcement Learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 471-477, September 2015.

  7. D. Wookey and G.D. Konidaris. Regularized Feature Selection in Reinforcement Learning. (Freely accessible draft.) Machine Learning 100(2), pages 655-676, September 2015.

  8. G.D. Konidaris, L.P. Kaelbling, and T. Lozano-Perez. Symbol Acquisition for Probabilistic High-Level Planning. In Proceedings of the Twenty Fourth International Joint Conference on Artificial Intelligence, pages 3619-3627, July 2015.

    Please note that this paper has been superseded by our JAIR 2018 paper.

  9. C. Amato, G.D. Konidaris, A. Anders, G. Cruz, J.P. How, and L.P. Kaelbling. Policy Search for Multi-Robot Coordination under Uncertainty. In Robotics: Science and Systems XI, July 2015.

  10. C. Amato, S. Omidshafiei, A. Agha-mohammadi, G.D. Konidaris, J.P. How, and L.P. Kaelbling. Probabilistic Planning for Multi-Robot Systems. In Proceedings of the RSS-2015 Workshop on Principles of Multi-Robot Systems, July 2015.

  11. C. Amato, G.D. Konidaris, G. Cruz, C. Maynor, J.P. How, and L.P. Kaelbling. Planning for Decentralized Control of Multiple Robots Under Uncertainty. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation, pages 1241-1248, May 2015.

  12. S. Niekum, S. Osentoski, G.D. Konidaris, S. Chitta, B. Marthi, and A.G. Barto. Learning Grounded Finite-State Representations from Unstructured Demonstrations. (Freely accessible draft.) The International Journal of Robotics Research 34(2), pages 131-157, February 2015. [Video]

2014
  1. G.D. Konidaris and F. Doshi-Velez. Hidden Parameter Markov Decision Processes: An Emerging Paradigm for Modeling Families of Related Tasks. In Proceedings of the AAAI 2014 Fall Symposium on Knowledge, Skill, and Behavior Transfer in Autonomous Robots, November 2014.

  2. C. Amato, G.D. Konidaris, J.P. How and L.P. Kaelbling. Decentralized Decision-Making Under Uncertainty for Multi-Robot Teams. In Proceedings of the 2014 IROS Workshop on The Future of Multiple-Robot Research and Its Multiple Identities, September 2014.

  3. E.L. Nelson, G.D. Konidaris, and N.E. Berthier. Hand preference status and reach kinematics in infants. Infant Behavior and Development 37(4), pages 615-623, 2014.

  4. G.D. Konidaris, L. Kaelbling and T. Lozano-Perez. Constructing Symbolic Representations for High-Level Planning. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1932-1940, July 2014.

    Please note that this paper has been superseded by our JAIR 2018 paper.

  5. N. Hollingsworth, J. Meyer, R. McGee, J. Doering, G.D. Konidaris and L. Kaelbling. Optimizing a Start-Stop Controller using Policy Search. In Proceedings of the Twenty-Sixth Annual Conference on Innovative Applications of Artificial Intelligence, pages 2984-2989, July 2014.

  6. B.C. da Silva, G.D. Konidaris, and A.G. Barto. Active Learning of Parameterized Skills. In Proceedings of the Thirty First International Conference on Machine Learning, pages 1737-1745, June 2014.

  7. B.C. da Silva, G. Baldassarre, G.D. Konidaris, and A.G. Barto. Learning Parameterized Motor Skills on a Humanoid Robot. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 5239-5244, May 2014. [Video]

  8. C. Amato, G.D. Konidaris, G. Cruz, C.A. Maynor, J.P. How and L.P. Kaelbling. Planning for Decentralized Control of Multiple Robots Under Uncertainty. In Proceedings of the 2014 ICAPS Workshop on Planning and Robotics, June 2014. [ArXiv] [Video]

  9. C. Amato, G.D. Konidaris and L.P. Kaelbling. Planning with Macro-Actions in Decentralized POMDPs. In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, pages 1273-1280, May 2014.

2013
  1. A.G. Barto, G.D. Konidaris, and C.M. Vigorito. Behavioral Hierarchy: Exploration and Representation. In Computational and Robotic Models of the Hierarchical Organization of Behavior, Baldassarre, Gianluca; Mirolli, Marco (Eds.), pages 13-46, Springer, Berlin, October 2013.

  2. F. Doshi-Velez and G.D. Konidaris. Hidden Parameter Markov Decision Processes: A Semiparametric Regression Approach for Discovering Latent Task Parametrizations. arXiv:1308.3513, August 2013.

  3. C. Trewick, P. Ranchod and G.D. Konidaris. Preferential Targeting of HIV Infected Hubs in a Scale-free Sexual Network. In the Annual Conference of the Computational Social Science Society of the Americas, August 2013. Honorable mention, best paper competition.

  4. G.D. Konidaris. Robots, Skills, and Symbols (Extended Abstract). In Proceedings of the 2013 Workshop on Machine Learning for Interactive Systems, August 2013.

  5. G.D. Konidaris, L.P. Kaelbling and T. Lozano-Perez. Symbol Acquisition for Task-Level Planning. In the AAAI 2013 Workshop on Learning Rich Representations from Low-Level Sensors, July 2013.

    Please note that this paper has been superseded by our JAIR 2018 paper.

  6. G.D. Konidaris, S. Kuindersma, S. Niekum, R.A. Grupen and A.G. Barto. Robot Learning: Some Recent Examples. In Proceedings of the Sixteenth Yale Workshop on Adaptive and Learning Systems, pages 71-76, Center for Systems Science, Dunham Laboratory, Department of Electrical Engineering, Yale University, New Haven CT, June 2013.

  7. G. Goretkin, A. Perez, R. Platt and G.D. Konidaris. Optimal Sampling-Based Planning for Linear-Quadratic Kinodynamic Systems. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2429-2436, May 2013.

2012
  1. F. Doshi-Velez and G.D. Konidaris. Transfer Learning by Discovering Latent Task Parametrizations. In the NIPS 2012 Workshop on Bayesian Nonparametric Models for Reliable Planning And Decision-Making Under Uncertainty, December 2012.

  2. S. Niekum, S. Osentoski, G.D. Konidaris and A.G. Barto. Learning and Generalization of Complex Tasks from Unstructured Demonstrations. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5239-5246, October 2012.

  3. B.C. da Silva, G.D. Konidaris and A.G. Barto. Learning Parameterized Skills. In Proceedings of the Twenty Ninth International Conference on Machine Learning, pages 1679-1686, June 2012.

  4. A. Perez, R. Platt, G.D. Konidaris, L.P. Kaelbling and T. Lozano-Perez. LQR-RRT*: Optimal Sampling-Based Motion Planning with Automatically Derived Extension Heuristics. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2537-2542, May 2012.

  5. G.D. Konidaris, I. Scheidwasser and A.G. Barto. Transfer in Reinforcement Learning via Shared Features. Journal of Machine Learning Research 13, pages 1333-1371, May 2012.

  6. E.L. Nelson, G.D. Konidaris, N.E. Berthier, M.C. Braun, M.S.F.X. Novak, S.J. Suomi and M.A. Novak. Kinematics of reaching and implications for handedness in rhesus monkey infants. Developmental Psychobiology 54(4), pages 460-467, May 2012.

  7. G.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. Robot Learning from Demonstration by Constructing Skill Trees. The International Journal of Robotics Research 31(3), pages 360-375, March 2012. (Freely accessible draft.)

2011
  1. G.D. Konidaris, S. Niekum and P.S. Thomas. TDγ: Re-evaluating Complex Backups in Temporal Difference Learning. Advances in Neural Information Processing Systems 24, pages 2402-2410, December 2011.

  2. G.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. Autonomous Skill Acquisition on a Mobile Manipulator. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 1468-1473, August 2011.

  3. G.D. Konidaris, S. Osentoski and P.S. Thomas. Value Function Approximation in Reinforcement Learning using the Fourier Basis. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 380-385, August 2011.

    Source code for the Fourier Basis is available on our software page.

  4. G.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. CST: Constructing Skill Trees by Demonstration. In Proceedings of the ICML Workshop on New Developments in Imitation Learning, July 2011.

  5. G.D. Konidaris, S.R. Kuindersma, R.A. Grupen and A.G. Barto. Acquiring Transferrable Mobile Manipulation Skills. In the RSS 2011 Workshop on Mobile Manipulation: Learning to Manipulate, June 2011.

  6. G.D. Konidaris. Autonomous Robot Skill Acquisition. PhD Thesis, Department of Computer Science, University of Massachusetts Amherst, May 2011.

2010
  1. G.D. Konidaris, S.R. Kuindersma, A.G. Barto and R.A. Grupen. Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories. Advances in Neural Information Processing Systems 23, pages 1162-1170, December 2010.

  2. S.R. Kuindersma, G.D. Konidaris, R.A. Grupen, and A.G. Barto. Learning from a Single Demonstration: Motion Planning with Skill Segmentation (poster abstract). In the NIPS 2010 Workshop on Learning and Planning in Batch Time Series Data, December 2010.

2009
  1. G.D. Konidaris and A.G. Barto. Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining. Advances in Neural Information Processing Systems 22, pages 1015-1023, December 2009.

    The code for the Pinball domain is available here. You can also download videos of the solutions shown in the paper.

    An earlier version appeared as: G.D. Konidaris and A.G. Barto. Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining. Technical Report UM-CS-2008-24, Department of Computer Science, University of Massachusetts Amherst, July 2008.

  2. G.D. Konidaris and A.G. Barto. Efficient Skill Learning Using Abstraction Selection. In Proceedings of the Twenty First International Joint Conference on Artificial Intelligence, pages 1107-1112, July 2009.

  3. G.D. Konidaris and A.G. Barto. Towards the Autonomous Acquisition of Robot Skill Hierarchies (poster abstract). In the Robotics: Science and Systems Workshop on Bridging the Gap Between High-Level Discrete Representations and Low-Level Continuous Behaviors, Seattle, June 2009.

  4. G.D. Konidaris and S. Osentoski. Value Function Approximation using the Fourier Basis (extended abstract). In the Multidisciplinary Symposium on Reinforcement Learning, Montreal, Canada, June 2009.

    Please note that this paper has been superseded by our AAAI 2011 paper.

    Source code for the Fourier Basis is available on our software page.

  5. G.D. Konidaris and A.G. Barto. Skill Chaining: Skill Discovery in Continuous Domains (extended abstract). In the Multidisciplinary Symposium on Reinforcement Learning, Montreal, Canada, June 2009.

    The code for the Pinball domain is available here.

2008
  1. G.D. Konidaris and A.G. Barto. Sensorimotor Abstraction Selection for Efficient, Autonomous Robot Skill Acquisition. Proceedings of the 7th IEEE International Conference on Development and Learning, pages 151-156, August 2008.

  2. G.D. Konidaris. Autonomous Robot Skill Acquisition (thesis summary). Doctoral Symposium, 23rd National Conference on Artificial Intelligence (AAAI 2008), July 2008.

  3. G.D. Konidaris and S. Osentoski. Value Function Approximation in Reinforcement Learning using the Fourier Basis. Technical Report UM-CS-2008-19, Department of Computer Science, University of Massachusetts Amherst, June 2008.

    Please note that this paper has been superseded by our AAAI 2011 paper.

    Source code for the Fourier Basis is available on our software page.

  4. E.L. Nelson, G.D. Konidaris and N.E. Berthier. Using Real-Time Motion Capture to Measure Handedness in Infants. Poster presentation at the XVIth Biennial International Conference on Infant Studies, Vancouver, Canada, March 2008.

2007
  1. L. Georgopoulos, G.M. Hayes and G.D. Konidaris. A Forward Model of Optic Flow for Detecting External Forces. Proceedings of the IEEE/RSJ 2007 International Conference on Intelligent Robots and Systems, pages 913-918, October 2007.

  2. G.D. Konidaris and A.G. Barto. Building Portable Options: Skill Transfer in Reinforcement Learning. In Proceedings of the Twentieth International Joint Conference on Artificial Intelligence, pages 895-900, January 2007.

    An earlier version appeared as: G.D. Konidaris and A.G. Barto. Building Portable Options: Skill Transfer in Reinforcement Learning. Technical Report UM-CS-2006-17, Department of Computer Science, University of Massachusetts at Amherst, March 2006.

2006
  1. G.D. Konidaris and A.G. Barto. An Adaptive Robot Motivational System. In From Animals to Animats 9: Proceedings of the 9th International Conference on the Simulation of Adaptive Behavior, pages 346-356, September 2006.

  2. G.D. Konidaris. A Framework for Transfer in Reinforcement Learning. In the ICML-06 Workshop on Structural Knowledge Transfer for Machine Learning, Pittsburgh PA, June 2006.

  3. G.D. Konidaris and A.G. Barto. Autonomous Shaping: Knowledge Transfer in Reinforcement Learning. In Proceedings of the Twenty Third International Conference on Machine Learning, pages 489-496, June 2006.

    An earlier version appeared as: G.D. Konidaris and A.G. Barto. Autonomous Shaping: Learning to Predict Reward for Novel States. Technical Report UM-CS-2005-58, Department of Computer Science, University of Massachusetts at Amherst, September 2005.

  4. S. Rauchas, B. Rosman, G.D. Konidaris and I.D. Sanders. Language Performance at High School and Success in First Year Computer Science. In Proceedings of the 37th SIGCSE Technical Symposium on Computer Science Education, pages 398-402, March 2006.

2005
  1. F.J. Stewart, T. Taylor and G.D. Konidaris. METAMorph: Experimenting with Genetic Regulatory Networks for Artificial Development. Proceedings of the VIIIth European Conference on Artificial Life, pages 108-117, September 2005.

  2. A. Stout, G.D. Konidaris and A.G. Barto. Intrinsically Motivated Reinforcement Learning: A Promising Framework for Developmental Robot Learning. In The AAAI Spring Symposium on Developmental Robotics, March 2005.

  3. G.D. Konidaris and G.M. Hayes. An Architecture for Behavior-Based Reinforcement Learning. Adaptive Behavior 13(1), pages 5-32, March 2005. (Freely accessible draft.)

2004
  1. G.D. Konidaris and G.M. Hayes. Anticipatory Learning for Focusing Search in Reinforcement Learning Agents. In the Second Workshop on Anticipatory Behavior in Adaptive Learning Systems (ABiALS 2004), July 2004.

  2. G.D. Konidaris and G.M. Hayes. Estimating Future Reward in Reinforcement Learning Animats using Associative Learning. In From Animals to Animats 8: Proceedings of the 8th International Conference on the Simulation of Adaptive Behavior, pages 297-304, July 2004.

  3. G.D. Konidaris, T. Taylor and J.C.T. Hallam. HydroGen: Automatically Generating Self-Assembly Code for Hydron Units. In Proceedings of the Seventh International Symposium on Distributed Autonomous Robotic Systems, pages 33-42, June 2004.

2003
  1. G.D. Konidaris. Behaviour-Based Reinforcement Learning. Master's Thesis, School of Informatics, University of Edinburgh, 2003.

2002
  1. G.D. Konidaris, D.A. Shell and N. Oren. Evolving Neural Networks to Play the Capture Game. Proceedings of the SAICSIT 2002 Postgraduate Symposium, September 2002.

2001
  1. G.D. Konidaris. Axial Line Placement in Deformed Urban Grids. Honours Dissertation, School of Computer Science, University of the Witwatersrand, 2001.

    Also released as: G.D. Konidaris and I.D. Sanders, Axial Line Placement in Deformed Urban Grids. Technical Report TR-Wits-CS-2002-04, School of Computer Science, University of the Witwatersrand, April 2002.

  2. J. Adler, G.D. Christelis, J.A. Deneys, G.D. Konidaris, G. Lewis, A.G. Lipson, R.L. Phillips, D.K. Scott-Dawkins, D.A. Shell, B.V. Strydom, W.M. Trakman and L.D. Van Gool. Finding Adjacencies in Non-Overlapping Polygons. Electronic Paper, Proceedings of the 2001 SAICSIT Conference, September 2001.