$ ./nampi_selector.sh
> nampi_2018 (v2) < nampi_2016 (v1)
$ ./faim_workshop.sh
███╗   ██╗ █████╗ ███╗   ███╗██████╗ ██╗
████╗  ██║██╔══██╗████╗ ████║██╔══██╗██║
██╔██╗ ██║███████║██╔████╔██║██████╔╝██║
██║╚██╗██║██╔══██║██║╚██╔╝██║██╔═══╝ ██║
██║ ╚████║██║  ██║██║ ╚═╝ ██║██║     ██║
╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝     ╚═╝╚═╝     ╚═╝ v2.0
Neural Abstract Machines & Program Induction v2 {
: A Federated Artificial Intelligence Meeting (FAIM) workshop (ICML, IJCAI/ECAI, AAMAS)
: Stockholm, Sweden
: July 15th }
> import nampi as np
> print(np.abstract)
Machine intelligence capable of learning complex procedural behavior, inducing (latent) programs, and reasoning with these programs is key to solving artificial intelligence. The problems of learning procedural behavior and program induction have been studied from different perspectives in many computer science fields, such as program synthesis [1], probabilistic programming [2], inductive logic programming [3], reinforcement learning [4], and, recently, deep learning. Despite the common goal, however, there has been little communication and collaboration between these fields.

Recently, the deep learning community has seen many successes in learning neural networks that use trainable memory abstractions. This has led to neural networks with differentiable data structures, such as Differentiable Neural Computers [5], Memory Networks [6], Neural Stacks [7, 8], and Hierarchical Attentive Memory [9], as well as to complex differentiable interpreters [10, 11] that combine differentiable structures with program induction and execution. In parallel, neural program induction models such as Neural Programmer-Interpreters [12], Neural Programmer [13], and DeepCoder [14] have created a lot of excitement in the field, promising the induction of algorithmic behavior and programs, and the inclusion of programming languages in the processes of execution and induction, while remaining end-to-end trainable. Trainable program induction models have the potential to make a substantial impact on many problems involving long-term memory, reasoning, and procedural execution, such as question answering, dialog, and robotics.

The aim of the NAMPI workshop is to bring together researchers and practitioners from academia and industry in the areas of deep learning, program synthesis, probabilistic programming, programming languages, inductive programming, and reinforcement learning to exchange ideas on the future of program induction, with a special focus on neural network models and abstract machines. Through this workshop, we look to identify common challenges, exchange ideas and lessons learned across the different fields, and establish one or more standard evaluation benchmarks for approaches that learn with abstraction and/or reason with induced programs.
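To make the notion of a differentiable data structure concrete, here is a minimal NumPy sketch of the continuous-stack idea behind Neural Stacks [7, 8]. It is our illustration rather than code from the cited papers, and the helper name stack_step is ours: push, pop, and read are weighted by soft gate values instead of being executed discretely, so a controller network that emits those gates can be trained through the stack end-to-end.

import numpy as np

def stack_step(V, s, d, u, v):
    """One soft stack update, simplified from Grefenstette et al. [7].

    V: (t, m) array of previously pushed values
    s: (t,)   array of their remaining strengths
    d: push gate in [0, 1]
    u: pop gate in [0, 1]
    v: (m,)   new value to push
    Returns the updated (V, s) and a soft read r of the stack top.
    """
    # Pop: remove up to u units of strength, starting from the top.
    s = s.copy()
    remaining = u
    for i in reversed(range(len(s))):
        removed = min(remaining, s[i])  # ReLU-like kink; subdifferentiable
        s[i] -= removed
        remaining -= removed
    # Push: append the new value with strength d.
    V = np.vstack([V, v])
    s = np.append(s, d)
    # Read: blend the topmost (at most) one unit of total strength.
    r = np.zeros_like(v, dtype=float)
    budget = 1.0
    for i in reversed(range(len(s))):
        w = min(budget, s[i])
        r += w * V[i]
        budget -= w
    return V, s, r

# Hypothetical usage: push a vector, then pop it off again.
V, s = np.zeros((0, 3)), np.zeros(0)
V, s, r = stack_step(V, s, d=1.0, u=0.0, v=np.ones(3))   # push -> r == [1, 1, 1]
V, s, r = stack_step(V, s, d=0.0, u=1.0, v=np.zeros(3))  # pop  -> r == [0, 0, 0]

In [7] the gates d and u are produced by a recurrent controller at every timestep; here they are passed in by hand to keep the sketch self-contained.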
> print(np.call_for_participation)
We encourage visionary and position papers, as well as work-in-progress submissions. We also accept previously published papers and cross-submissions, but will not include them in the workshop proceedings.
- Standard Workshop Paper
- Work-in-progress & cross-submissions
All submissions should be typeset in ICML format.
Full CFP is available here
> print(np.key_dates)
Paper submission deadline: June 8th (extended)
Notification of acceptance: June 23rd
Final papers due: June 27th
NAMPI workshop: July 15th
Deadlines are at 11:59pm PDT.
> print(np.area_header)
> for area_of_interest in sorted(np.areas):
>     print("- %s" % area_of_interest)
Areas of interest for discussion and submissions include, but are not limited to:
- Applications
- Compositionality in Representation Learning
- Differentiable Data Structures
- Differentiable Memory
- Function and (sub-)Program Compositionality
- Inductive Logic Programming
- Knowledge Representation in Neural Abstract Structures
- Large-scale Program Induction
- Machine Learning-guided Programming
- Meta-Learning and Self-improving
- Neural Abstract Machines
- Optimisation Methods for Program Induction
- Probabilistic Programming
- Program Induction: Datasets, Tasks, and Evaluation
- Program Synthesis
- Reinforcement Learning for Program Induction
- Semantic Parsing
> for speaker in np.speakers:
>     print("- %s (%s)" % (speaker.name, speaker.affiliation))
- Richard Evans (DeepMind)
- Sumit Gulwani (Microsoft)
- Brenden Lake (New York University)
- Veselin Raychev (DeepCode)
- Rishabh Singh (Google Brain)
- Satinder Singh (University of Michigan)
- Armando Solar-Lezama (MIT)
- Dawn Song (UC Berkeley)
- Oriol Vinyals (DeepMind)
> print(np.schedule)
> print(np.recording_notification)
08:50-09:00 Opening Remarks
# 1st talk set
09:00-09:30 Dawn Song: Deep Learning for Program Synthesis: Lessons & Open Challenges [VIDEO] [slides]
09:30-10:00 Armando Solar-Lezama: Program synthesis and ML join forces [VIDEO] [slides]
10:00-10:30 Coffee Break
# 2nd talk set
10:30-11:00 Sumit Gulwani: Programming by Examples: Logical Reasoning meets Machine Learning [VIDEO] [slides]
11:00-11:30 Brenden Lake: Program induction for building more human-like machine learning algorithms [VIDEO]
11:30-12:00 Satinder Singh: Program Induction and Language: Two Vignettes [VIDEO]
12:00-12:30 Oriol Vinyals: Generating Visual Programs with Agents [VIDEO]
12:30-14:00 Lunch Break
# 3rd talk set
14:00-14:30 Rishabh Singh: Neural Meta Program Synthesis [VIDEO] [slides]
14:30-15:00 Veselin Raychev: Interpretable Probabilistic Models for Code [VIDEO]
15:00-15:30 Richard Evans: Differentiable Inductive Logic Programming [VIDEO] [slides]
15:30-15:35 Best Paper Award
15:35-16:50 Poster Session and Coffee Break with refreshments and mingling†
16:50-18:00 Panel with Sumit Gulwani, Brenden Lake, Percy Liang, Rishabh Singh, Armando Solar-Lezama and Joshua Tenenbaum [VIDEO]
All video recordings* can be found in this playlist.
> for paper in np.accepted_papers:
>     print("- %s%s: %s" % ("[BEST PAPER AWARD]‡ " if paper.best else "", paper.authors, paper.title))
- Surya Bhupatiraju, Kumar Krishna Agrawal, Rishabh Singh: Towards Mixed Optimization for Reinforcement Learning with Program Synthesis [OpenReview] [arXiv]
- Michael Chang, Abhishek Gupta, Thomas Griffiths, Sergey Levine: Automatically Constructing Compositional and Recursive Learners [OpenReview] [arXiv]
- Mehdi Drissi, Olivia Watkins, Aditya Khant, Vivaswat Ojha, Pedro Sandoval, Rakia Segev, Eric Weiner, Robert Keller: Program Language Translation Using a Grammar-Driven Tree-to-Tree Model [OpenReview] [arXiv]
- [BEST PAPER AWARD]‡ Karlis Freivalds, Renars Liepins: Improving the Neural GPU Architecture for Algorithm Learning [OpenReview] [arXiv]
- Pasquale Minervini, Matko Bošnjak, Tim Rocktäschel, Sebastian Riedel: Towards Neural Theorem Proving at Scale [OpenReview] [arXiv]
- Chenglong Wang, Po-Sen Huang, Alex Polozov, Marc Brockschmidt, Rishabh Singh: Execution-Guided Neural Program Decoding [OpenReview] [arXiv]
- Maksym Zavershynskyi, Alex Skidanov, Illia Polosukhin: NAPS: Natural Program Synthesis Dataset [OpenReview] [arXiv]
> for paper in np.accepted_extended_abstracts:
>     print('- %s. "%s"' % (paper.authors, paper.title))
- Forough Arabshahi, Sameer Singh, Animashree Anandkumar. "Towards Solving Differential Equations through Neural Programming"
- Andres Campero, Aldo Pareja, Tim Klinger, Josh Tenenbaum, Sebastian Riedel. "Theory Learning and Logical Rule Induction with Neural Theorem Proving"
- Ali Davody, Homa Davoudi, Mihai S. Baba, Răzvan V. Florian. "Learning to generate HTML code from images with no supervisory data"
- Sebastijan Dumančić, Tias Guns, Wannes Meert, Hendrik Blockeel. "Auto-encoding Logic Programs"
- Kevin Ellis, Lucas Morales, Mathias Sablé Meyer, Armando Solar-Lezama, Joshua B. Tenenbaum. "DREAMCODER: Bootstrapping Domain-Specific Languages for Neurally-Guided Bayesian Program Learning"
- Roy Fox, Richard Shin, Pieter Abbeel, Ken Goldberg, Dawn Song, Ion Stoica. "Imitation Learning of Hierarchical Programs via Variational Inference"
- Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, Xiaodong He. "Natural Language to Structured Query Generation via Meta-Learning"
- Cătălin Florian Perţicaş, Mihai S. Baba, Homa Davoudi, Răzvan V. Florian. "Hierarchical segmentation of graphical interfaces for Document Object Model reconstruction"
- Richard Shin, Illia Polosukhin, Dawn Song. "Improving Neural Program Synthesis with Inferred Execution Traces"
- Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, Dawn Song. "Synthetic Datasets for Neural Program Synthesis"
- Tommaso Soru, Edgard Marx, André Valdestilhas, Diego Esteves, Diego Moussallem, Gustavo Publio. "Neural Machine Translation for Query Construction and Composition"
- Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, Swarat Chaudhuri. "Synthesis of Differentiable Functional Programs for Lifelong Learning"
> for organizer in np.organizers:
>     print("- %s (%s)" % (organizer.name, organizer.affiliation))
> for pc_member in np.pc_members:
>     print("- %s (%s)" % (pc_member.name, pc_member.affiliation))
> from PIL import Image
> for sponsor in np.sponsors:
>     Image.open(sponsor.logo).show()
> raise np.TravelBursaryException('Travel Bursaries available!')
Thanks to our generous sponsors§, we are able to offer a small number of travel bursaries. Preference will be given to (student) authors of accepted papers. For more details and to apply, drop us a line at nampi@googlegroups.com.
> for i, reference in enumerate(np.references):
>     print("[%d] %s" % (i + 1, reference))
[1] Manna, Zohar, and Richard Waldinger. "A deductive approach to program synthesis." ACM Transactions on Programming Languages and Systems (TOPLAS) 2.1 (1980): 90-121.
[2] McCallum, Andrew, Karl Schultz, and Sameer Singh. "Factorie: Probabilistic programming via imperatively defined factor graphs." Advances in Neural Information Processing Systems (2009).
[3] Muggleton, Stephen, and Luc De Raedt. "Inductive logic programming: Theory and methods." The Journal of Logic Programming 19 (1994): 629-679.
[4] Sutton, Richard S., and Andrew G. Barto. "Reinforcement learning: An introduction." Cambridge: MIT Press (1998).
[5] Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural Turing machines." arXiv preprint arXiv:1410.5401 (2014).
[6] Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." International Conference on Learning Representations (2015).
[7] Grefenstette, Edward, et al. "Learning to transduce with unbounded memory." Advances in Neural Information Processing Systems (2015).
[8] Joulin, Armand, and Tomas Mikolov. "Inferring algorithmic patterns with stack-augmented recurrent nets." Advances in Neural Information Processing Systems (2015).
[9] Andrychowicz, Marcin, and Karol Kurach. "Learning efficient algorithms with hierarchical attentive memory." arXiv preprint arXiv:1602.03218 (2016).
[10] Bošnjak, Matko, et al. "Programming with a differentiable Forth interpreter." International Conference on Machine Learning (2017).
[11] Gaunt, Alexander L., et al. "TerpreT: A probabilistic programming language for program induction." arXiv preprint arXiv:1608.04428 (2016).
[12] Reed, Scott, and Nando de Freitas. "Neural programmer-interpreters." International Conference on Learning Representations (2016).
[13] Neelakantan, Arvind, Quoc V. Le, and Ilya Sutskever. "Neural programmer: Inducing latent programs with gradient descent." International Conference on Learning Representations (2016).
[14] Balog, Matej, et al. "DeepCoder: Learning to write programs." arXiv preprint arXiv:1611.01989 (2016).
* recording sponsored by DeepMind and Bloomsbury AI ↩
† mingling session sponsored by UCL Computer Science ↩
‡ best paper award sponsored by DeepCode ↩
§ travel bursaries sponsored by NEAR and Bloomsbury AI ↩