
Batu Ozturkler

Researcher at Stanford University

Publications -  13
Citations -  64

Batu Ozturkler is an academic researcher from Stanford University. The author has contributed to research in the topics of computer science and convex optimization, has an h-index of 3, and has co-authored 8 publications receiving 23 citations. Previous affiliations of Batu Ozturkler include ETH Zurich.

Papers
Posted Content

Convex Regularization Behind Neural Reconstruction

TL;DR: A convex duality framework is advocated that makes a two-layer fully-convolutional ReLU denoising network amenable to convex optimization; it not only enables optimal training with convex solvers but also facilitates interpreting training and prediction.
Proceedings ArticleDOI

Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

TL;DR: In this paper, the authors derive equivalent finite-dimensional convex problems that are interpretable and solvable to global optimality for non-linear dot-product self-attention, as well as for alternative mechanisms such as the MLP-Mixer and the Fourier Neural Operator (FNO).
Posted Content

Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization

TL;DR: In this article, an analytic framework based on convex duality is introduced to obtain exact convex representations of weight-decay-regularized ReLU networks with batch normalization (BN), which can be trained in polynomial time.
Proceedings ArticleDOI

ThinkSum: Probabilistic reasoning over sets using large language models

TL;DR: It is argued that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, it is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs.
Posted Content

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

TL;DR: In this paper, the authors analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality and, for a variety of generators, expose the conditions under which the GAN can be solved exactly with convex optimization approaches or represented as a convex-concave game.