
Showing papers by "Apple Inc." published in 2022


Journal ArticleDOI
TL;DR: In this paper, caseins and methylcellulose (MC) were selected as building materials to prepare a class of mixed gels by adding glucono-δ-lactone (GDL) to induce the gelation of composite MC/casein systems, where the casein concentration was fixed at 8.0% and the MC concentration varied from 0 to 1.0%.

8 citations


Journal ArticleDOI
ceerrc1
13 Sep 2022

Journal ArticleDOI
Artem A. Krotov
27 Aug 2022

Proceedings ArticleDOI
Baoshan Xue
04 Mar 2022
TL;DR: In this article, an advanced deep neural network consisting of tailored structural elements capable of detecting small, abstract-level features from the image data was developed and tested for binary classification of MRI scans.
Abstract: In this study, a challenging binary classification was carried out on a large (>100 GB, hundreds of subjects in total) annotated MRI dataset as part of a public competition in which over a thousand teams submitted solutions to the problem. For the classification, an advanced deep neural network consisting of tailored structural elements capable of detecting small, abstract-level features from the image data was developed and tested. The resulting ROC was 0.74 on the test data ($\mathrm{N}=87$) and 0.55 in the extended test-data phase; these results placed in the top 5% and top 25% of the >1000 submitted solutions, respectively. The relevance and accuracy of the solution were discussed, including a specific finding about interesting differences in classification performance between data from different types of MRI scans, which is in line with other independent research findings. This may indicate that the results of the deep neural network provide some additional value regarding the presence of MGMT. However, at a general level the topic remains open and requires further studies to achieve a better understanding.
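The abstract does not name the architecture or framework, so the following is only a minimal illustrative sketch, assuming a PyTorch-style 3D convolutional network with hypothetical layer sizes, of the kind of binary MRI classifier described above (single logit output, to be trained with a binary cross-entropy loss):

```python
# Hypothetical sketch only: the paper does not disclose its architecture.
# A minimal 3D-CNN binary classifier for MRI volumes, written in PyTorch.
import torch
import torch.nn as nn

class SmallMRIClassifier(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Stacked 3D convolutions to capture small, abstract-level features.
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global pooling over the volume
            nn.Flatten(),
            nn.Linear(32, 1),          # single logit for the binary decision
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage: one single-channel 64^3 MRI volume; train with nn.BCEWithLogitsLoss.
model = SmallMRIClassifier()
volume = torch.randn(1, 1, 64, 64, 64)
probability = torch.sigmoid(model(volume))
```

The global average pooling keeps the classification head independent of the input volume size, which is convenient when volumes from different MRI scan types have different shapes.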


Journal ArticleDOI
Noy Barak
TL;DR: In this paper, the authors show that curvature harms test performance through two new mechanisms, the shift-curvature and bias-curvature, in addition to a known parameter-covariance mechanism, and derive a new, explicit SGD steady-state distribution showing that SGD optimizes an effective potential related to but different from the train loss.
Abstract: A longstanding debate surrounds the related hypotheses that low-curvature minima generalize better, and that stochastic gradient descent (SGD) discourages curvature. We offer a more complete and nuanced view in support of both hypotheses. First, we show that curvature harms test performance through two new mechanisms, the shift-curvature and bias-curvature, in addition to a known parameter-covariance mechanism. The shift refers to the difference between train and test local minima, and the bias and covariance are those of the parameter distribution. These three curvature-mediated contributions to test performance are reparametrization-invariant even though curvature itself is not. Although the shift is unknown at training time, the shift-curvature as well as the other mechanisms can still be mitigated by minimizing overall curvature. Second, we derive a new, explicit SGD steady-state distribution showing that SGD optimizes an effective potential related to but different from train loss, and that SGD noise mediates a trade-off between low-loss and low-curvature regions of this effective potential. Third, combining our test performance analysis with the SGD steady state shows that for small SGD noise, the shift-curvature is the dominant mechanism of the three. Our experiments demonstrate the significant impact of shift-curvature on test loss, and further explore the relationship between SGD noise and curvature.
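To make the three mechanisms concrete, a schematic second-order expansion (an expository assumption, not the paper's exact derivation) writes the expected test loss in terms of the train-test shift $\delta$, the bias $b$ and covariance $\Sigma$ of the trained-parameter distribution around the train minimum, and the test-loss Hessian $H$ at the test minimum $\theta^{*}_{\text{test}}$:

$$\mathbb{E}_{\theta}\left[L_{\text{test}}(\theta)\right] \approx L_{\text{test}}(\theta^{*}_{\text{test}}) + \tfrac{1}{2}\,(\delta + b)^{\top} H\, (\delta + b) + \tfrac{1}{2}\operatorname{tr}(H \Sigma).$$

Expanding the quadratic term separates a shift-curvature contribution $\tfrac{1}{2}\delta^{\top} H \delta$ and a bias-curvature contribution $\tfrac{1}{2}b^{\top} H b$ (plus a cross term), while $\tfrac{1}{2}\operatorname{tr}(H \Sigma)$ is the parameter-covariance mechanism. All three shrink as the curvature $H$ is reduced, which is consistent with the abstract's point that minimizing overall curvature mitigates every mechanism even though the shift itself is unknown at training time.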




Posted ContentDOI
Junjie Wei
21 Feb 2022
TL;DR: In this paper, the authors propose a poisoning-robust private summation protocol in the multiple-server setting recently studied in PRIO, and show that by relaxing the security constraint in SMC to a differential-privacy-like guarantee, one can improve over PRIO in terms of communication requirements and client-side computation.
Abstract: Computing the noisy sum of real-valued vectors is an important primitive in differentially private learning and statistics. In private federated learning applications, these vectors are held by client devices, leading to a distributed summation problem. Standard Secure Multiparty Computation (SMC) protocols for this problem are susceptible to poisoning attacks, where a client may have a large influence on the sum without being detected. In this work, we propose a poisoning-robust private summation protocol in the multiple-server setting recently studied in PRIO. We present a protocol for vector summation that verifies that the Euclidean norm of each contribution is approximately bounded. We show that by relaxing the security constraint in SMC to a differential-privacy-like guarantee, one can improve over PRIO in terms of communication requirements as well as client-side computation. Unlike SMC algorithms that inevitably cast integers to elements of a large finite field, our algorithms work over integers/reals, which may allow for additional efficiencies.
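The norm-verification step between the servers is the core of the protocol and is not reproduced here; the sketch below is only an illustration under simplifying assumptions (two servers, client-side L2 clipping standing in for the verified norm bound, Gaussian noise added to the aggregate for the differential-privacy-style guarantee) of how real-valued additive secret sharing lets each server see nothing but a masked share of every client's vector:

```python
# Illustrative sketch only, not the paper's protocol: two-server additive
# secret sharing of real-valued vectors, with client-side L2 clipping standing
# in for the protocol's norm verification and Gaussian noise on the aggregate.
import numpy as np

rng = np.random.default_rng(0)
DIM, NORM_BOUND, NOISE_STD = 8, 1.0, 0.1

def client_shares(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Clip x to the norm bound and split it into two additive shares."""
    norm = np.linalg.norm(x)
    if norm > NORM_BOUND:                  # enforce ||x||_2 <= NORM_BOUND
        x = x * (NORM_BOUND / norm)
    mask = rng.normal(size=x.shape)        # random mask hides x from each server
    return x - mask, mask                  # share_A + share_B == x

# Each client splits its vector; each server only ever sees one share per client.
clients = [rng.normal(size=DIM) for _ in range(5)]
shares = [client_shares(x) for x in clients]

server_a_total = sum(s[0] for s in shares)  # server A sums the shares it received
server_b_total = sum(s[1] for s in shares)  # server B sums the shares it received

# Servers combine their partial sums and add noise for the DP-style guarantee.
noisy_sum = server_a_total + server_b_total + rng.normal(scale=NOISE_STD, size=DIM)
```

Because the shares live over the reals rather than a large finite field, aggregation is plain floating-point addition, which illustrates the kind of efficiency the abstract alludes to in its final sentence.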