$ ./nips_workshop.sh

   _  _____   __  ______  ____
  / |/ / _ | /  |/  / _ \/ _/
 /    / __ |/ /|_/ / ___// /
/_/|_/_/ |_/_/  /_/_/  /___/   v1.0

Neural Abstract Machines & Program Induction
Workshop at NIPS 2016, Barcelona, Spain
> import nampi as np
>
> print(np.abstract)
Machine intelligence capable of learning complex procedural behavior, inducing (latent) programs, and reasoning with these programs is key to solving artificial intelligence. The problems of learning procedural behavior and program induction have been studied from different perspectives in many computer science fields, such as program synthesis [1], probabilistic programming [2], inductive logic programming [3], reinforcement learning [4], and, recently, deep learning. Despite the common goal, however, there seems to be little communication and collaboration between the different fields focused on this problem.

Recently, the deep learning community has seen many successes in learning neural networks capable of using trainable memory abstractions. This has led to the development of neural networks with differentiable data structures, such as Neural Turing Machines [5], Memory Networks [6], Neural Stacks [7, 8], and Hierarchical Attentive Memory [11], among others. Simultaneously, neural program induction models like Neural Programmer-Interpreters [9] and the Neural Programmer [10] have created much excitement in the field, promising the induction of algorithmic behavior and the integration of programming languages into the processes of execution and induction, while remaining trainable end-to-end. Trainable program induction models have the potential to make a substantial impact on many problems involving long-term memory, reasoning, and procedural execution, such as question answering, dialog, and robotics.

The aim of the NAMPI workshop is to bring together researchers and practitioners from academia and industry, across the areas of deep learning, program synthesis, probabilistic programming, inductive programming, and reinforcement learning, to exchange ideas on the future of program induction, with a special focus on neural network models and abstract machines. Through this workshop, we aim to identify common challenges, exchange ideas and lessons learned from the different fields, and establish a set of standard evaluation benchmarks for approaches that learn with abstraction and/or reason with induced programs.
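To make the "differentiable data structures" mentioned above concrete, here is a minimal NumPy sketch of the core trick behind Neural Stacks [7]: push and pop are replaced by real-valued strengths, so every operation is a soft blend and gradients can flow through the full history of stack operations. This is an illustrative reconstruction, not code from the cited paper; the function stack_step and its signature are our own naming.

import numpy as np

def stack_step(values, strengths, push_val, d_push, d_pop):
    # values:    (t, dim) array; row i is the vector at depth i,
    #            with the last row the top of the stack
    # strengths: (t,) array; how strongly each row is still "present"
    # push_val:  (dim,) vector to push this step
    # d_push:    scalar in [0, 1], continuous push signal
    # d_pop:     scalar in [0, 1], continuous pop signal

    # Pop: remove up to d_pop units of strength, top of stack first.
    strengths = strengths.copy()
    remaining = d_pop
    for i in range(len(strengths) - 1, -1, -1):
        removed = min(strengths[i], remaining)
        strengths[i] -= removed
        remaining -= removed

    # Push: append the new vector with strength d_push.
    values = np.vstack([values, push_val])
    strengths = np.append(strengths, d_push)

    # Read: blend the topmost unit of strength into a single vector.
    read = np.zeros_like(push_val)
    budget = 1.0
    for i in range(len(strengths) - 1, -1, -1):
        take = min(strengths[i], budget)
        read += take * values[i]
        budget -= take
    return values, strengths, read

# Push two vectors, then fully pop: the read recovers the first vector.
vals, strs = np.zeros((0, 3)), np.zeros(0)
vals, strs, r = stack_step(vals, strs, np.array([1., 0., 0.]), 1.0, 0.0)
vals, strs, r = stack_step(vals, strs, np.array([0., 1., 0.]), 1.0, 0.0)
vals, strs, r = stack_step(vals, strs, np.array([0., 0., 1.]), 0.0, 1.0)
print(r)  # -> [1. 0. 0.]

In [7], the same idea is made end-to-end trainable by letting a recurrent controller emit push_val, d_push, and d_pop, and by expressing the min operations via rectified linear units so the update remains differentiable.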
> print(np.call_for_participation)
We encourage visionary and position papers as well as work-in-progress submissions. We also accept previously published papers and cross-submissions, but these will not be included in the workshop proceedings.
Standard Workshop Paper
Submissions of regular workshop papers should be substantially original and novel, and should not exceed 4 pages, excluding references. Submissions should be anonymised, as we will strive to organise a double-blind review with open comments on beta.openreview.net. If double-blind review is not available in time for submission, we will run a single-blind review instead. Authors will be expected to present a poster, and camera-ready versions must be uploaded to arXiv, as they will be displayed on the workshop webpage. Papers should be submitted to http://openreview.net/group?id=NIPS.cc/2016/workshop/NAMPI
Work-in-progress & cross-submissions
Preliminary work and cross-submissions can be submitted as a 2-page extended abstract. Authors will be expected to present a poster; however, these papers do not count as NAMPI workshop papers and will not be included in the workshop proceedings. Interested authors should submit their extended abstracts to nampi@googlegroups.com. Papers in this category do not need to be anonymised, and their selection is at the discretion of the organising committee.
All submissions should be typeset in NIPS format.
Full CFP is available here
> print(np.key_dates)
[EXTENDED] Paper submission deadline: October 30th
[EXTENDED] Notification of acceptance: November 22nd
Final Papers Due: December 1st
NAMPI workshop: December 10th
Deadlines are at 11:59pm PDT.
> print(np.area_header)
> for area_of_interest in sorted(np.areas): \
>     print("- %s" % area_of_interest)
Areas of interest for discussion and submissions include, but are not limited to:
- Applications of Machine Learning-Based Program Induction
- Compositionality in Representation Learning for Program Induction
- Differentiable Memory
- Differentiable Data Structures
- Function and (sub-)Program Compositionality
- Inductive Logic Programming
- Knowledge Representation in Neural Abstract Structures
- Large-scale Program Induction
- Machine Learning for End-user Development
- Meta-Learning and Self-Improvement
- Neural Abstract Machines
- Optimisation methods for Program Induction
- Program Induction: Datasets, Tasks, and Evaluation
- Program Synthesis
- Probabilistic Programming
- Reinforcement Learning for Program Induction
- Semantic Parsing for Program Induction
> for speaker in shuffle(np.speakers): \
>     print("- %s (%s)" % (speaker.name, speaker.affiliation))
- Rob Fergus (Facebook AI Research and New York University)
- Alex Graves (Google DeepMind)
- Edward Grefenstette (Google DeepMind)
- Percy Liang (Stanford University)
- Stephen Muggleton (Imperial College London)
- Doina Precup (McGill University)
- Jürgen Schmidhuber (IDSIA)
- Charles Sutton (University of Edinburgh)
- Daniel Tarlow (Microsoft Research)
- Joshua Tenenbaum (Massachusetts Institute of Technology)
- Martin Vechev (DeepCode and ETH Zurich)
> print(np.schedule)
08:50-09:00  Opening Remarks
09:00-09:30  Stephen Muggleton: What use is Abstraction in Deep Program Induction? [VIDEO] [slides]
09:30-10:00  Daniel Tarlow: In Search of Strong Generalization: Building Structured Models in the Age of Neural Networks [VIDEO] [slides]
10:00-10:30  Charles Sutton: Learning Program Representation: Symbols to Semantics [VIDEO] [slides] [handouts]
10:30-11:00  Coffee Break
11:00-11:30  Doina Precup: From temporal abstraction to programs [VIDEO] [slides]
11:30-12:00  Rob Fergus: Learning Communication and Abstraction with Neural Nets [VIDEO] [slides]
12:00-12:30  Percy Liang: How Can We Write Large Programs without Thinking? [VIDEO] [slides]
12:30-14:00  Lunch Break
14:00-14:30  Martin Vechev: Program Synthesis and Machine Learning [VIDEO] [slides]
14:30-15:00  Edward Grefenstette: Limitations of RNNs: a computational perspective [VIDEO] [slides]
15:00-16:00  Poster Session and Coffee Break
16:00-16:30  Jürgen Schmidhuber: Learning how to Learn Learning Algorithms: Recursive Self-Improvement [VIDEO] [slides]
16:30-17:00  Joshua Tenenbaum and Kevin Ellis: Bayesian program learning: Prospects for building more human-like AI systems [VIDEO] [slides]
17:00-17:30  Alex Graves: Learning When to Halt With Adaptive Computation Time [VIDEO] [slides]
17:30-18:30  Debate with Percy Liang, Jürgen Schmidhuber, Joshua Tenenbaum, Martin Vechev, Daniel Tarlow and Dawn Song [VIDEO]
18:30-18:40  Best Paper Award and Closing Remarks
> for paper in np.accepted_papers: \
>     print("- %s (%s)" % (("[BEST PAPER AWARD]\n" if paper.best else '') + paper.authors, paper.title))
- [BEST PAPER AWARD]
Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow. "TerpreT: A Probabilistic Programming Language for Program Induction"
- Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao. "Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision"
- Kenton W. Murray, Jayant Krishnamurthy. "Probabilistic Neural Programs"
- Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli. "Learning to superoptimize programs"
- Tristan Deleu, Joseph Dureau. "Learning Operations on a Stack with Neural Turing Machines"
> for paper in np.accepted_extended_abstracts: \
>     print("- %s (%s)" % (paper.authors, paper.title))
- Fan Yang, Zhilin Yang, William W. Cohen. "A differentiable approach to inductive logic programming"
- Junyoung Chung, Sungjin Ahn, Yoshua Bengio. "Learning Latent Multiscale Structure Using Recurrent Neural Networks"
- Andrew Cropper, Stephen H. Muggleton. "Meta-interpretive learning of efficient logic programs"
- Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel. "A Neural Forth Abstract Machine"
- Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin. "Programs as Black-Box Explanations"
- Josef Strunc, Joseph R. Davidson, Sungmin Aum. "Gradual Program Induction"
- Marcin Andrychowicz, Karol Kurach. "Learning Efficient Algorithms with Hierarchical Attentive Memory"
- Jacob Andreas, Mitchell Stern, Dan Klein. "Learning to Plan Without a Planner"
> for organizer in shuffle(np.organizers): \
>     print("- %s (%s)" % (organizer.name, organizer.affiliation))
- Matko Bošnjak (University College London)
- Nando de Freitas (University of Oxford and Google DeepMind)
- Tejas Kulkarni (Massachusetts Institute of Technology and Google DeepMind)
- Arvind Neelakantan (University of Massachusetts Amherst)
- Scott Reed (Google DeepMind)
- Sebastian Riedel (University College London)
- Tim Rocktäschel (University College London)
> for pc_member in np.pc_members: \
>     print("- %s (%s)" % (pc_member.name, pc_member.affiliation))
- Alexander Gaunt (Microsoft Research)
- Andrew McCallum (University of Massachusetts Amherst)
- Antoine Bordes (Facebook AI Research)
- Armand Joulin (Facebook AI Research)
- Caglar Gulcehre (University of Montreal)
- Dianhuan Lin (Amazon)
- Greg Wayne (Google DeepMind)
- Jacob Andreas (UC Berkeley)
- Jason Weston (Facebook AI Research)
- Lukasz Kaiser (Google Brain)
- Luke Zettlemoyer (University of Washington)
- Marc Brockschmidt (Microsoft Research)
- Sameer Singh (UC Irvine)
- Sumit Chopra (Facebook AI Research)
> for sponsor in np.sponsors: \
>     Image.open(sponsor.logo).show()
> for i, reference in enumerate(np.references): \
>     print("[%d] %s" % (i + 1, reference))
[1] Manna, Zohar, and Richard Waldinger. "A deductive approach to program synthesis." ACM Transactions on Programming Languages and Systems (TOPLAS) 2.1 (1980): 90-121.
[2] McCallum, Andrew, Karl Schultz, and Sameer Singh. "FACTORIE: Probabilistic programming via imperatively defined factor graphs." Advances in Neural Information Processing Systems (2009).
[3] Muggleton, Stephen, and Luc De Raedt. "Inductive logic programming: Theory and methods." The Journal of Logic Programming 19 (1994): 629-679.
[4] Sutton, Richard S., and Andrew G. Barto. "Reinforcement Learning: An Introduction." Cambridge: MIT Press (1998).
[5] Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural Turing machines." arXiv preprint arXiv:1410.5401 (2014).
[6] Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." International Conference on Learning Representations (2015).
[7] Grefenstette, Edward, et al. "Learning to transduce with unbounded memory." Advances in Neural Information Processing Systems (2015).
[8] Joulin, Armand, and Tomas Mikolov. "Inferring algorithmic patterns with stack-augmented recurrent nets." Advances in Neural Information Processing Systems (2015).
[9] Reed, Scott, and Nando de Freitas. "Neural programmer-interpreters." International Conference on Learning Representations (2016).
[10] Neelakantan, Arvind, Quoc V. Le, and Ilya Sutskever. "Neural programmer: Inducing latent programs with gradient descent." International Conference on Learning Representations (2016).
[11] Andrychowicz, Marcin, and Karol Kurach. "Learning Efficient Algorithms with Hierarchical Attentive Memory." arXiv preprint arXiv:1602.03218 (2016).