Open Access Posted Content

Julia: A Fresh Approach to Numerical Computing

TL;DR
The Julia programming language combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing that is designed to be easy and fast.
Abstract
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment, and 3. There are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design --- a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience.
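As an illustrative aside (not code from the paper itself), the abstract's two key techniques, multiple dispatch and generic programming, can be sketched in a few lines of Julia; the types and function names below are hypothetical:

```julia
# Hypothetical sketch of multiple dispatch and generic programming in Julia.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    s::Float64
end

# Multiple dispatch: Julia picks the method that matches the argument's concrete type.
area(c::Circle) = π * c.r^2
area(sq::Square) = sq.s^2

# Generic programming: one definition works for any collection of shapes,
# and the compiler specializes the generated code for the element types it sees.
total_area(shapes) = sum(area, shapes)

println(total_area([Circle(1.0), Square(2.0)]))  # ≈ 7.1416
```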


Citations
Journal Article

JuMP: A Modeling Language for Mathematical Optimization

TL;DR: JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax.
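As an illustrative aside (not taken from the cited paper), a minimal JuMP model written in that algebraic syntax might look like the following, assuming the GLPK solver package is installed:

```julia
# Minimal illustrative JuMP model (assumes the JuMP and GLPK packages are installed).
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + 2y <= 14)
@constraint(model, 3x - y >= 0)
@objective(model, Max, x + y)

optimize!(model)
println("x = ", value(x), ", y = ", value(y))
```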
Journal Article

Gene Regulatory Network Inference from Single-Cell Data Using Multivariate Information Measures.

TL;DR: This work develops PIDC, a fast, efficient algorithm that uses partial information decomposition (PID) to identify regulatory relationships between genes and demonstrates that the higher-order information captured by PIDC allows it to outperform pairwise mutual information-based algorithms when recovering true relationships present in simulated data.
Journal Article

The (black) art of runtime evaluation: Are we comparing algorithms or implementations?

TL;DR: This work substantiates its points with extensive experiments, using clustering and outlier detection methods with and without index acceleration, and discusses what one can learn from evaluations, whether experiments are properly designed, and what kind of conclusions one should avoid.
Journal Article

Demonstration of quantum advantage in machine learning

TL;DR: A quantum algorithm for learning parity with noise is demonstrated on a five-qubit superconducting processor, and the authors show that it finds the solution much faster than classical methods.
Posted Content

Network inference from single-cell data using multivariate information measures

TL;DR: This work develops PIDC, a fast, efficient algorithm that uses partial information decomposition (PID) to identify regulatory relationships between genes, and demonstrates that the higher-order information captured by PIDC allows it to outperform pairwise mutual information-based algorithms when recovering true relationships present in simulated data.
References
Journal Article

R: A Language for Data Analysis and Graphics

TL;DR: The authors describe their experience designing and implementing a statistical computing language that combines what they felt were useful features of two existing computer languages, and they argue that the new language offers advantages in portability, computational efficiency, memory management, and scoping.
Journal Article

The NumPy array: a structure for efficient numerical computation

TL;DR: As this effort shows, NumPy performance can be improved through three techniques: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts.
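The same three ideas carry over to Julia (kept in Julia here to match the other examples; this is an illustrative analogue, not code from the cited paper):

```julia
# Illustrative Julia analogue of the three techniques described above.
x = rand(10_000)

# Vectorized, fused computation: the dotted expression compiles to a single loop,
# minimizing operation counts and avoiding temporary arrays.
y = @. 3x^2 + 2x + 1

# Avoid copying data: a view shares memory with x instead of allocating a new slice.
first_half = @view x[1:5_000]
sum(first_half)
```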
Proceedings Article

LLVM: a compilation framework for lifelong program analysis & transformation

TL;DR: The design of the LLVM representation and compiler framework is evaluated in three ways: the size and effectiveness of the representation, including the type information it provides; compiler performance for several interprocedural problems; and illustrative examples of the benefits LLVM provides for several challenging compiler problems.
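Julia itself uses LLVM as its code-generation backend; as an illustrative aside (not from the cited paper), the LLVM IR that Julia emits for a function can be inspected interactively:

```julia
# Illustrative: inspecting the LLVM IR Julia generates for a small function.
using InteractiveUtils   # provides @code_llvm (loaded by default in the REPL)

f(x) = 2x + 1

@code_llvm f(3)     # IR specialized for Int arguments
@code_llvm f(3.0)   # a separate specialization for Float64 arguments
```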
Book

Introduction to Linear Algebra

TL;DR: The fifth edition of this hugely successful textbook retains the quality of earlier editions while adding numerous minor improvements and major additions, including a new chapter on singular values and singular vectors and ways to analyze a matrix of data.
Journal Article

Basic Linear Algebra Subprograms for Fortran Usage

TL;DR: A package of 38 low-level subprograms for many of the basic operations of numerical linear algebra, intended to be used with Fortran, is presented.
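Julia's LinearAlgebra standard library wraps these routines; as an illustrative aside (not from the cited paper), one of the original Level-1 BLAS operations can be called directly:

```julia
# Illustrative: calling a classic BLAS routine through Julia's LinearAlgebra wrapper.
using LinearAlgebra

x = ones(3)
y = fill(2.0, 3)

# axpy!: overwrite y with a*x + y, one of the original Level-1 BLAS operations.
BLAS.axpy!(0.5, x, y)
println(y)   # [2.5, 2.5, 2.5]
```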
Trending Questions (1)
Are if statements slow in Julia?

Julia challenges the notion that high-level dynamic programs have to be slow: because the compiler specializes each method for the concrete argument types it receives, an if statement in Julia need not be any slower than the equivalent branch in a statically compiled language.
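As an illustrative sketch (hypothetical function name, not from the page), a branch inside a Julia function compiles down to ordinary machine-level control flow once the method is specialized for its argument types:

```julia
# Hypothetical example: a function containing an if statement.
function clamp_positive(x)
    if x < 0
        return zero(x)
    else
        return x
    end
end

# Julia compiles a specialized native method per argument type, so the branch
# costs about the same as it would in a statically compiled language.
using InteractiveUtils
@code_native clamp_positive(-1.5)   # inspect the generated machine code
```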