Szemerédi Regularity Lemma and Arithmetic Progressions, Annie Marsden, with Anup B. Rao, July 2015. [pdf]
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. International Conference on Machine Learning (ICML), 2021. [pdf] [talk] [poster]

This is the academic homepage of Yang Liu (I publish under Yang P. Liu).

Yujia Jin. Talks: Learning and Games Program, Simons Institute, 2022; Young Researcher Workshop, Cornell ORIE, Sept. 2021; ACO Student Seminar, Georgia Tech, Sept. 2021; NeurIPS Spotlight presentation, Dec. 2019.

I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications. The design of algorithms is traditionally a discrete endeavor; however, many advances have come from a continuous viewpoint. If you see any typos or issues, feel free to email me.

"General variance reduction framework for solving saddle-point problems & improved runtimes for matrix games."

Aaron Sidford (sidford@stanford.edu). Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O), which I taught in Fall 2019.

"Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)."

I received my PhD from KTH in Stockholm, Sweden, and my BSc + MSc at the Goethe University in Frankfurt, Germany.

Stochastic Bias-Reduced Gradient Methods, NeurIPS 2021. Another research focus is optimization algorithms. We organize regular talks and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email). I am a fifth-and-final-year PhD student in the Department of Management Science and Engineering at Stanford in the Operations Research group. Emphasis will be on providing mathematical tools for combinatorial optimization.

[pdf] The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.

Acceleration with a Ball Optimization Oracle, NeurIPS 2020. [pdf] [poster] Done under the mentorship of M. Malliaris. The authors of most papers are ordered alphabetically.

I am currently a third-year graduate student in EECS at MIT working under the wonderful supervision of Ankur Moitra. I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees. I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms. … to be advised by Prof. Dongdong Ge.

"A general continuous optimization framework for better dynamic (decremental) matching algorithms."

Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. Eigenvalues of the Laplacian and their relationship to the connectedness of a graph.

Nima Anari, Yang P. Liu, Thuy-Duong Vuong. Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence, FOCS 2022.
Maximum Flow and Minimum-Cost Flow in Almost Linear Time, FOCS 2022, Best Paper.

I enjoy understanding the theoretical grounding of many algorithms that are … Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, COLT 2021. This work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD); a minimal sketch of the averaging idea follows below.
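As context for the averaging claim above, here is a minimal sketch of Polyak-Ruppert iterate averaging for SGD on a synthetic least-squares problem. The data, step size, and problem sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch: Polyak-Ruppert iterate averaging for SGD on least squares.
# The synthetic problem and step size below are illustrative only.
rng = np.random.default_rng(0)
n, d = 10_000, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.5 * rng.normal(size=n)

x = np.zeros(d)
x_avg = np.zeros(d)
eta = 0.01  # constant step size
for t in range(1, n + 1):
    i = rng.integers(n)
    grad = (A[i] @ x - b[i]) * A[i]  # stochastic gradient of 0.5*(a_i^T x - b_i)^2
    x -= eta * grad
    x_avg += (x - x_avg) / t         # running average of the iterates

print("last iterate error:    ", np.linalg.norm(x - x_star))
print("averaged iterate error:", np.linalg.norm(x_avg - x_star))
```

With a constant step size, the last iterate bounces around the optimum at a noise floor set by the step size, while the averaged iterate smooths that noise out; this is the basic phenomenon the averaging analyses make precise.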
We are excited to have Professor Sidford join the Management Science & Engineering faculty starting Fall 2016. One research focus is dynamic algorithms (i.e., algorithms that maintain solutions as their input changes). Before attending Stanford, I graduated from MIT in May 2018.

Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.

I am an Assistant Professor in the School of Computer Science at Georgia Tech. In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford. [pdf] [talk] [poster]

Aleksander Mądry; Generalized preconditioning and network flow problems. Yair Carmon. Before Stanford, I worked with John Lafferty at the University of Chicago. … in math and computer science from Swarthmore College in 2008. CV (last updated 01-2022): PDF. Contact.

Selected recent papers:
Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration. To appear in Neural Information Processing Systems (NeurIPS), 2022.
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, with Yair Carmon, Arun Jambulapati, and Aaron Sidford. International Conference on Machine Learning (ICML), 2022. [pdf] [talk] [poster]
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, ICALP 2022.
Lower Bounds for Finding Stationary Points I.
Accelerated Methods for Nonconvex Optimization, SIAM Journal on Optimization, 2018 (arXiv).
Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification. [pdf] [talk]
Fourier Transformation at a Representation, Annie Marsden, 2013. [pdf]
On the Sample Complexity of Average-reward MDPs, BayLearn, 2021.
Conference on Learning Theory (COLT), 2015.
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness. Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff. Innovations in Theoretical Computer Science (ITCS) 2018.

Many of my results use fast matrix multiplication. I am a fifth year Ph.D. student in Computer Science at Stanford University co-advised by Gregory Valiant and John Duchi. "Sample complexity for average-reward MDPs?"

We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM). This improves upon the previous best known running times of \(O(n r^{1.5} T_{\mathrm{ind}})\) due to Cunningham in 1986 and \(\tilde{O}(n^2 T_{\mathrm{ind}} + n^3)\) due to Lee, Sidford, and Wong in 2015, where \(T_{\mathrm{ind}}\) denotes the time per independence-oracle query. (A toy brute-force illustration of minimizing a small submodular function appears below.)
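To make the SFM setup concrete, the sketch below minimizes a small submodular function, a graph cut minus a modular term, by brute-force enumeration. The graph, the weight 1.5, and the function are illustrative assumptions; nothing here reflects the paper's lower-bound construction.

```python
from itertools import combinations

# Toy illustration of submodular function minimization (SFM) by brute force.
# The cut function of a graph is a canonical submodular function; subtracting
# a modular term keeps submodularity while allowing negative values.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small illustrative graph
n = 4

def cut(S):
    # number of edges with exactly one endpoint in S (submodular in S)
    return sum((u in S) != (v in S) for u, v in edges)

def f(S):
    # cut(S) minus a modular term: still submodular, minimizer is nontrivial
    return cut(S) - 1.5 * len(S)

subsets = (frozenset(c) for k in range(n + 1) for c in combinations(range(n), k))
best = min(subsets, key=f)
print("minimizer:", set(best), "value:", f(best))
```

Brute force makes \(2^n\) evaluations; the point of SFM algorithms (and of lower bounds against them) is to bound how many oracle evaluations of \(f\) are truly necessary.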
Aaron Sidford, Assistant Professor of Management Science and Engineering and of Computer Science. Contact information: Administrative Contact, Jackie Nguyen (Administrative Associate). Huang Engineering Center.

Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization: I used these notes to accompany the course Discrete Mathematics and Algorithms. Here are some lecture notes that I have written over the years. Some I am still actively improving and all of them I am happy to continue polishing.

Outdated CV [as of Dec '19]. Students: I am very lucky to advise the following Ph.D. students: Siddartha Devic (co-advised with Aleksandra Korolova).

AISTATS, 2021. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS). Selected for oral presentation. With Yang P. Liu and Aaron Sidford.

I am fortunate to be advised by Aaron Sidford. I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group. Previously, I was a visiting researcher at the Max Planck Institute for Informatics and a Simons-Berkeley Postdoctoral Researcher.

Aaron Sidford joins Stanford's Management Science & Engineering department, launching new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms."

"A low-bias low-cost estimator of subproblem solution suffices for acceleration!"

Towards this goal, some fundamental questions need to be solved, such as how machines can learn models of their environments that are useful for performing tasks.

Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes. In Symposium on Discrete Algorithms (SODA 2018) (arXiv).
Efficient \(\tilde{O}(n/\epsilon)\) Spectral Sketches for the Laplacian and its Pseudoinverse.
Stability of the Lanczos Method for Matrix Function Approximation.
Oral Presentation for Misspecification in Prediction Problems and Robustness via Improper Learning. Yin Tat Lee and Aaron Sidford.
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan (alphabetical authorship). arXiv | conference pdf.
Thesis, 2016. [pdf] Annie Marsden.
"A short version of the conference publication under the same title. (arXiv pre-print)" arXiv | pdf, Annie Marsden, R. Stephen Berry.

[name] = yangpliu. Selected papers:
Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence
Maximum Flow and Minimum-Cost Flow in Almost Linear Time
Online Edge Coloring via Tree Recurrences and Correlation Decay
Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao
Discrepancy Minimization via a Self-Balancing Walk
Faster Divergence Maximization for Faster Maximum Flow

Teaching: Winter 2020, teaching assistant for EE364a: Convex Optimization I, taught by John Duchi. Fall 2018, teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis; Fall 2019, taught by Greg Valiant.

MS&E welcomes new faculty member, Aaron Sidford! [pdf] [poster]

Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.

Geometric Median in Nearly Linear Time. Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki, and Aaron Sidford. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016), Cambridge, MA, USA, June 18-21, 2016. (A classical baseline for this problem is sketched below.)
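For intuition about the objective in Geometric Median in Nearly Linear Time, here is a sketch of the classical Weiszfeld iteration, which minimizes \(\sum_i \|x - p_i\|_2\). This baseline is not the paper's nearly-linear-time algorithm; the points, iteration count, and tolerance below are illustrative assumptions.

```python
import numpy as np

# Classical Weiszfeld iteration for the geometric median: each step is a
# weighted average of the points, with weights inversely proportional to
# the current distances. Simple baseline; not the STOC 2016 algorithm.
def geometric_median(P, iters=100, eps=1e-9):
    """P: (n, d) array of points; approximately minimizes sum_i ||x - p_i||_2."""
    x = P.mean(axis=0)  # start at the centroid
    for _ in range(iters):
        dist = np.linalg.norm(P - x, axis=1)
        dist = np.maximum(dist, eps)          # avoid division by zero at a data point
        w = 1.0 / dist
        x = (P * w[:, None]).sum(axis=0) / w.sum()
    return x

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(geometric_median(P))  # robust to the outlier, unlike the coordinate-wise mean
```

The contrast with the mean is the point of the objective: a single far-away point drags the mean arbitrarily, while the geometric median moves only slightly.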
We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives.

Aaron Sidford. Email: sidford@stanford.edu. Management Science & Engineering.

Teaching:
Optimization and Algorithmic Paradigms (CS 261): Winter '23
Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22
Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18
Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19; Spring '19, '18, '17
Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17

With Yair Carmon, Kevin Tian, and Aaron Sidford. I also completed my undergraduate degree (in mathematics) at MIT. From 2016 to 2018, I also worked in … I often do not respond to emails about applications.

You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers).

Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.

"A special case where variance reduction can be applied to nonconvex optimization (monotone operators)." A minimal sketch of the underlying variance-reduction template follows below.
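The variance-reduction template behind quotes like the one above can be illustrated with an SVRG-style loop on a smooth finite sum. This is a minimal sketch under the assumption of a least-squares objective; the saddle-point and monotone-operator extensions discussed in these papers build on the same unbiased, anchored gradient estimator.

```python
import numpy as np

# SVRG-style variance reduction on F(x) = (1/n) sum_i 0.5*(a_i^T x - b_i)^2.
# Problem data, step size, and epoch counts below are illustrative only.
rng = np.random.default_rng(1)
n, d = 500, 10
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):                      # gradient of the i-th component f_i
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):                      # exact gradient of F
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
eta = 0.01
for epoch in range(20):
    x_ref = x.copy()
    g_ref = full_grad(x_ref)           # anchor: exact gradient at a reference point
    for _ in range(n):
        i = rng.integers(n)
        # unbiased estimator of grad F(x) whose variance shrinks as x -> x_ref
        g = grad_i(x, i) - grad_i(x_ref, i) + g_ref
        x -= eta * g

print("gradient norm after SVRG:", np.linalg.norm(full_grad(x)))
```

The key identity is that \(\mathbb{E}_i[\nabla f_i(x) - \nabla f_i(x_{\mathrm{ref}}) + \nabla F(x_{\mathrm{ref}})] = \nabla F(x)\), so the estimator stays unbiased while its variance vanishes as the iterate approaches the anchor.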
Contact: dwoodruf (at) cs (dot) cmu (dot) edu or dpwoodru (at) gmail (dot) com. CV (updated July, 2021).

To appear as a contributed talk at QIP 2023: Quantum Pseudoentanglement.

"A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."

Selected publications:
Path finding methods for linear programming: Solving linear programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) iterations and faster algorithms for maximum flow
Accelerated methods for nonconvex optimization
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
A faster cutting plane method and its implications for combinatorial and convex optimization
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems (a basic, non-accelerated variant is sketched after this list)
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
Uniform sampling for matrix approximation
Near-optimal time and sample complexities for solving Markov decision processes with a generative model
Single pass spectral sparsification in dynamic streams
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
Accelerating stochastic gradient descent for least squares regression
Efficient inverse maintenance and faster algorithms for linear programming
Lower bounds for finding stationary points I
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
Competing with the empirical risk minimizer in a single pass
Variance reduced value iteration and faster algorithms for solving Markov decision processes
Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation
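As a companion to the coordinate-descent entry above, here is a basic randomized coordinate descent sketch for solving \(Ax = b\) with symmetric positive definite \(A\). The accelerated methods from the paper are more involved (careful sampling probabilities plus momentum); the test system below is an illustrative assumption.

```python
import numpy as np

# Randomized coordinate descent for Ax = b with symmetric positive definite A:
# each step exactly minimizes 0.5*x^T A x - b^T x along one random coordinate.
# Non-accelerated baseline; the cited work adds acceleration for faster rates.
rng = np.random.default_rng(2)
d = 50
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)            # illustrative SPD system
b = rng.normal(size=d)

x = np.zeros(d)
for _ in range(20 * d):
    i = rng.integers(d)
    # setting dF/dx_i = A[i] @ x - b[i] to zero gives the coordinate update
    x[i] -= (A[i] @ x - b[i]) / A[i, i]

print("residual norm:", np.linalg.norm(A @ x - b))
```

Each update touches one row of \(A\), so an iteration costs \(O(d)\) rather than the \(O(d^2)\) of a full gradient step; that per-iteration cheapness is what the accelerated analyses exploit.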
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."

Publications:
The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, and Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. International Conference on Machine Learning (ICML 2022) (arXiv).
Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. Conference on Learning Theory (COLT 2022).
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method. Conference on Learning Theory (COLT 2022) (arXiv).
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022) (arXiv).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
Algorithmic trade-offs for girth approximation in undirected graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
Stochastic Bias-Reduced Gradient Methods. With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021) (arXiv).
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021) (arXiv).
Minimum cost flows, MDPs, and \(\ell_1\)-regression in nearly linear time for dense instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021) (arXiv).
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
With Yair Carmon, Yujia Jin, and Kevin Tian: Unit Capacity Maxflow in Almost \(O(m^{4/3})\) Time. Invited to the special issue (arXiv before merge).
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020) (arXiv).
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020) (arXiv).
Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).