
Google AI Blog: Google at ICLR 2022

The 10th International Conference on Learning Representations (ICLR 2022) kicks off this week, bringing together researchers, entrepreneurs, engineers and students alike to discuss and explore the rapidly advancing field of deep learning. Entirely virtual this year, ICLR 2022 offers conference and workshop tracks that present some of the latest research in deep learning and its applications to areas ranging from computer vision, speech recognition and text understanding to robotics, computational biology, and more.

As a Platinum Sponsor of ICLR 2022 and Champion DEI Action Fund contributor, Google will have a strong presence with nearly 100 accepted publications and extensive participation on organizing committees and in workshops. If you have registered for ICLR 2022, we hope you'll watch our talks and learn about the work done at Google to address complex problems that affect billions of people. Here you can learn more about the research we will be presenting as well as our general involvement at ICLR 2022 (those with Google affiliations in bold).

Senior Area Chairs:

Includes: Been Kim, Dale Schuurmans, Sergey Levine

Area Chairs:

Includes: Adam White, Aditya Menon, Aleksandra Faust, Amin Karbasi, Amir Globerson, Andrew Dai, Balaji Lakshminarayanan, Behnam Neyshabur, Ben Poole, Bhuwan Dhingra, Bo Dai, Boqing Gong, Cristian Sminchisescu, David Ha, David Woodruff, Denny Zhou, Dipanjan Das, Dumitru Erhan, Dustin Tran, Emma Strubell, Eunsol Choi, George Dahl, George Tucker, Hanie Sedghi, Heinrich Jiang, Hossein Mobahi, Hugo Larochelle, Izhak Shafran, Jasper Snoek, Jean-Philippe Vert, Jeffrey Pennington, Justin Gilmer, Karol Hausman, Kevin Swersky, Krzysztof Choromanski, Mathieu Blondel, Matt Kusner, Michael Ryoo, Ming-Hsuan Yang, Minmin Chen, Mirella Lapata, Mohammad Ghavamzadeh, Mohammad Norouzi, Naman Agarwal, Nicholas Carlini, Olivier Bachem, Piyush Rai, Prateek Jain, Quentin Berthet, Richard Nock, Rose Yu, Sewoong Oh, Silvio Lattanzi, Slav Petrov, Srinadh Bhojanapalli, Tim Salimans, Ting Chen, Tong Zhang, Vikas Sindhwani, Weiran Wang, William Cohen, Xiaoming Liu

Workflow Chairs:

Includes: Yaguang Li

Diversity Equity & Inclusion Chairs:

Includes: Rosanne Liu

Invited Talks

Beyond Interpretability: Developing a Language to Shape Our Relationships with AI

Google Speaker: Been Kim

Do You See What I See? Large-Scale Learning from Multimodal Videos

Google Speaker: Cordelia Schmid


Publications

Hyperparameter Tuning with Renyi Differential Privacy – 2022 Outstanding Paper Award

Nicolas Papernot, Thomas Steinke

MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling

Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel

The Information Geometry of Unsupervised Reinforcement Learning

Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Learning Strides in Convolutional Neural Networks – 2022 Outstanding Paper Award

Rachid Riad*, Olivier Teboul, David Grangier, Neil Zeghidour

Poisoning and Backdooring Contrastive Learning

Nicholas Carlini, Andreas Terzis

Coordination Among Neural Modules Through a Shared Global Workspace

Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, Yoshua Bengio

Fine-Tuned Language Models Are Zero-Shot Learners (see the blog post)

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le

Large Language Models Can Be Strong Differentially Private Learners

Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto

Progressive Distillation for Fast Sampling of Diffusion Models

Tim Salimans, Jonathan Ho

Exploring the Limits of Large Scale Pre-training

Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi

Scarf: Self-Supervised Contrastive Learning Using Random Feature Corruption

Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler

Scalable Sampling for Nonsymmetric Determinantal Point Processes

Insu Han, Mike Gartrell, Jennifer Gillenwater, Elvis Dohmatob, Amin Karbasi

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations

Xiangning Chen, Cho-Jui Hsieh, Boqing Gong

ViTGAN: Training GANs with Vision Transformers

Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Generalized Decision Transformer for Offline Hindsight Information Matching

Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu

The MultiBERTs: BERT Reproductions for Robustness Analysis

Thibault Sellam, Steve Yadlowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ellie Pavlick

Scaling Laws for Neural Machine Translation

Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, Colin Cherry

Interpretable Unsupervised Diversity Denoising and Artefact Removal

Mangal Prakash, Mauricio Delbracio, Peyman Milanfar, Florian Jug

Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective

Qi Lyu, Xiao Fu, Weiran Wang, Songtao Lu

Memorizing Transformers

Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Churn Reduction via Distillation

Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

Path Auxiliary Proposal for MCMC in Discrete Space

Haoran Sun, Hanjun Dai, Wei Xia, Arun Ramamurthy

On the Relation Between Statistical Learning and Perceptual Distances

Alexander Hepburn, Valero Laparra, Raul Santos-Rodriguez, Johannes Ballé, Jesús Malo

Possibility Before Utility: Learning And Using Hierarchical Affordances

Robby Costales, Shariq Iqbal, Fei Sha

MT3: Multi-Task Multitrack Music Transcription

Josh Gardner*, Ian Simon, Ethan Manilow*, Curtis Hawthorne, Jesse Engel

Bayesian Neural Network Priors Revisited

Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison

GradMax: Growing Neural Networks using Gradient Information

Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, Max Vladymyrov

Scene Transformer: A Unified Architecture for Predicting Future Trajectories of Multiple Agents

Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens

The Role of Pretrained Representations for the OOD Generalization of RL Agents

Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Autoregressive Diffusion Models

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks

Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals

Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard

Anisotropic Random Feature Regression in High Dimensions

Gabriel C. Mel, Jeffrey Pennington

Open-Vocabulary Object Detection via Vision and Language Knowledge Distillation

Xiuye Gu, Tsung-Yi Lin*, Weicheng Kuo, Yin Cui

MCMC Should Mix: Learning Energy-Based Model with Flow-Based Backbone

Erik Nijkamp*, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

Effect of Scale on Catastrophic Forgetting in Neural Networks

Vinay Ramasesh, Aitor Lewkowycz, Ethan Dyer

Incremental False Negative Detection for Contrastive Learning

Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, Ming-Hsuan Yang

Towards Evaluating the Robustness of Neural Networks Learned by Transduction

Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha

What Do We Mean by Generalization in Federated Learning?

Honglin Yuan*, Warren Morningstar, Lin Ning, Karan Singhal

ViDT: An Efficient and Effective Fully Transformer-Based Object Detector

Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang

Measuring CLEVRness: Black-Box Testing of Visual Reasoning Models

Spyridon Mouselinos, Henryk Michalewski, Mateusz Malinowski

Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models (see the blog post)

Xiaofang Wang, Dan Kondratyuk, Eric Christiansen, Kris M. Kitani, Yair Alon (prev. Movshovitz-Attias), Elad Eban

Leveraging Unlabeled Data to Predict Out-of-Distribution Performance

Saurabh Garg*, Sivaraman Balakrishnan, Zachary C. Lipton, Behnam Neyshabur, Hanie Sedghi

Data-Driven Offline Optimization for Architecting Hardware Accelerators (see the blog post)

Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions

Chen Zhu*, Zheng Xu, Mingqing Chen, Jakub Konecny, Andrew Hard, Tom Goldstein

Policy Gradients Incorporating the Future

David Venuto, Elaine Lau, Doina Precup, Ofir Nachum

Discrete Representations Strengthen Vision Transformer Robustness

Chengzhi Mao*, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa

SimVLM: Simple Visual Language Model Pretraining with Weak Supervision (see the blog post)

Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao

Neural Stochastic Dual Dynamic Programming

Hanjun Dai, Yuan Xue, Zia Syed, Dale Schuurmans, Bo Dai

PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions

Zhaoqi Leng, Mingxing Tan, Chenxi Liu, Ekin Dogus Cubuk, Xiaojie Shi, Shuyang Cheng, Dragomir Anguelov

Information Prioritization Through Empowerment in Visual Model-Based RL

Homanga Bharadhwaj*, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine

Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning

Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander Toshev, Sergey Levine, Brian Ichter

Understanding and Leveraging Overparameterization in Recursive Value Estimation

Chenjun Xiao, Bo Dai, Jincheng Mei, Oscar Ramirez, Ramki Gummadi, Chris Harris, Dale Schuurmans

The Efficiency Misnomer

Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, Yi Tay

On the Role of Population Heterogeneity in Emergent Communication

Mathieu Rita, Florian Strub, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux

No One Representation to Rule Them All: Overlapping Features of Training Methods

Raphael Gontijo-Lopes, Yann Dauphin, Ekin D. Cubuk

Data Poisoning Won't Save You From Facial Recognition

Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation

David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin

Maximum Entropy RL (Provably) Solves Some Robust RL Problems

Benjamin Eysenbach, Sergey Levine

Auto-scaling Vision Transformers Without Training

Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou

Optimizing Few-Step Diffusion Samplers by Gradient Descent

Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Fortuitous Forgetting in Connectionist Networks

Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

Benchmarking the Spectrum of Agent Capabilities

Danijar Hafner

Charformer: Fast Character Transformers via Gradient-Based Subword Tokenization

Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler

Mention Memory: Incorporating Textual Knowledge into Transformers Through Entity Mention Attention

Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen

Eigencurve: Optimal Learning Rate Schedule for SGD on Quadratic Objectives with Skewed Hessian Spectrums

Rui Pan, Haishan Ye, Tong Zhang

Scale Efficiently: Insights from Pre-training and Fine-Tuning Transformers

Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler

Omni-Scale CNNs: A Simple and Effective Kernel Size Configuration for Time Series Classification

Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang

Embedded-Model Flows: Combining the Inductive Biases of Model-Free Deep Learning and Explicit Probabilistic Modeling

Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni

Post Hoc Explanations May Be Ineffective for Detecting Unknown Spurious Correlation

Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim

Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning

Mark Hamilton, Scott Lundberg, Stephanie Fu, Lei Zhang, William T. Freeman

Pix2seq: A Language Modeling Framework for Object Detection (see the blog post)

Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton

Mirror Descent Policy Optimization

Manan Tomar, Lior Shani, Yonathan Efroni, Mohammad Ghavamzadeh

CodeTrek: Flexible Modeling of Code Using an Extensible Relational Representation

Pardis Pashakhanloo, Aaditya Naik, Yuepeng Wang, Hanjun Dai, Petros Maniatis, Mayur Naik

Conditional Object-Centric Learning From Video

Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff

A Loss Curvature Perspective on Training Instabilities of Deep Learning Models

Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George E. Dahl, Zack Nado, Orhan Firat

Autonomous Reinforcement Learning: Formalism and Benchmarking

Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

TRAIL: Near-Optimal Imitation Learning with Suboptimal Data

Mengjiao Yang, Sergey Levine, Ofir Nachum

Minimax Optimization With Smooth Algorithmic Adversaries

Tanner Fiez, Lillian J. Ratliff, Chi Jin, Praneeth Netrapalli

Unsupervised Semantic Segmentation by Distilling Feature Correspondences

Mark Hamilton, Zhoutong Zhang, Bharath Hariharan, Noah Snavely, William T. Freeman

InfinityGAN: Towards Infinite-Pixel Image Synthesis

Chieh Hubert Lin, Hsin-Ying Lee, Yen-Chi Cheng, Sergey Tulyakov, Ming-Hsuan Yang

Shuffle Private Stochastic Convex Optimization

Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

Hybrid Random Features

Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller

Vector-Quantized Image Modeling With Improved VQGAN

Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, Yonghui Wu

On the Benefits of Maximum Likelihood Estimation for Regression and Forecasting

Pranjal Awasthi, Abhimanyu Das, Rajat Sen, Ananda Theertha Suresh

Surrogate Gap Minimization Improves Sharpness-Aware Training

Juntang Zhuang*, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha C. Dvornek, Sekhar Tatikonda, James S. Duncan, Ting Liu

Online Target Q-learning With Reverse Experience Replay: Efficiently Finding the Optimal Policy for Linear MDPs

Naman Agarwal, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Syomantak Chaudhuri

CrossBeam: Learning to Search in Bottom-Up Program Synthesis

Kensen Shi, Hanjun Dai, Kevin Ellis, Charles Sutton


Workshops

Workshop on the Elements of Reasoning: Objects, Structure, and Causality (OSC)

Organizers include: Klaus Greff, Thomas Kipf

Workshop on Agent Learning in Open-Endedness

Organizers include: Krishna Srinivasan

Speakers include: Natasha Jaques, Danijar Hafner

Wiki-M3L: Wikipedia and Multi-modal & Multi-lingual Research

Organizers include: Klaus Greff, Thomas Kipf

Speakers include: Jason Baldridge, Tom Duerig

Setting Up ML Evaluation Standards to Accelerate Progress

Organizers include: Rishabh Agarwal

Speakers and Panelists include: Katherine Heller, Sara Hooker, Corinna Cortes

From Cells to Societies: Collective Learning Across Scales

Organizers include: Mark Sandler, Max Vladymyrov

Speakers include: Blaise Aguera y Arcas, Alexander Mordvintsev, Michael Mozer

Emergent Communication: New Frontiers

Speakers include: Natasha Jaques

Deep Learning for Code

Organizers include: Jonathan Herzig

GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory

Speakers include: Gintare Karolina Dziugaite

Generalizable Policy Learning in the Physical World

Speakers and Panelists include: Mrinal Kalakrishnan

CoSubmitting Summer (CSS) Workshop

Organizers include: Rosanne Liu

*Work done while at Google.


