
Mehrdad Showkatbakhsh

Researcher at University of California, Los Angeles

Publications - 10
Citations - 152

Mehrdad Showkatbakhsh is an academic researcher from the University of California, Los Angeles. The author has contributed to research in the topics of Observability and Information privacy, has an h-index of 5, and has co-authored 10 publications receiving 106 citations. Previous affiliations of Mehrdad Showkatbakhsh include the University of Southern California.

Papers
Journal Article

The effect of pulsed electromagnetic fields on the acceleration of tooth movement.

TL;DR: This study was designed to determine whether a pulsed electromagnetic field (PEMF) affects orthodontic tooth movement; the findings suggest that application of a PEMF can accelerate orthodontic tooth movement.
Journal Article

Securing state reconstruction under sensor and actuator attacks: Theory and design

TL;DR: The notion of sparse strong observability is introduced, and it is shown to be a necessary and sufficient condition for correctly reconstructing the state despite the considered attacks. An observer is proposed that harnesses the complexity of this intrinsically combinatorial problem by leveraging satisfiability modulo theory (SMT) solving.
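
As a rough illustration of the kind of SMT encoding involved (not the paper's observer, which handles dynamic systems under both sensor and actuator attacks), the sketch below uses the z3 solver on a static, noise-free measurement model with sensor attacks only; the measurement matrix, data, and attack bound are made up.

```python
# Minimal sketch: recover a state from redundant sensors when at most
# `s_max` of them may be attacked, encoded as an SMT problem with z3.
# A static measurement model y = C x is used for illustration; the paper's
# setting (dynamic systems, sensor AND actuator attacks) is more general.
from z3 import Bool, If, Or, Real, Solver, Sum, is_true, sat

# Illustrative measurement matrix (4 sensors, 2 state dimensions) and data.
C = [[1, 0], [0, 1], [1, 1], [1, -1]]
y = [3, 5, 8, -2]      # measurements generated by the true state x = (3, 5)
y[2] = 100             # sensor 2 reports a falsified value
s_max = 1              # assumed bound on the number of attacked sensors

x = [Real(f"x{j}") for j in range(2)]          # unknown state
attacked = [Bool(f"a{i}") for i in range(4)]   # attack indicator per sensor

solver = Solver()
for i, row in enumerate(C):
    # Each sensor is either flagged as attacked or consistent with the state.
    consistent = Sum([row[j] * x[j] for j in range(2)]) == y[i]
    solver.add(Or(attacked[i], consistent))
# At most s_max sensors may be flagged as attacked.
solver.add(Sum([If(a, 1, 0) for a in attacked]) <= s_max)

if solver.check() == sat:
    m = solver.model()
    print("estimated state:", [m.evaluate(xj) for xj in x])
    print("flagged sensors:", [i for i, a in enumerate(attacked)
                               if is_true(m.evaluate(a))])
```

Any satisfying assignment recovers both the state and a set of at most s_max sensors whose readings had to be discarded, which is exactly the combinatorial choice the SMT solver searches over.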
Proceedings Article

Privacy-Utility Trade-off of Linear Regression under Random Projections and Additive Noise

TL;DR: This work considers a recently proposed notion of differential privacy that is stronger than the conventional $(\varepsilon, \delta)$-differential privacy, uses relative objective error as the utility metric, and finds that projecting the data to a lower-dimensional subspace before adding noise attains a better trade-off in general.
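
A minimal NumPy sketch of the general mechanism follows: compress the regression data with a random projection, add Gaussian noise, and solve least squares on the perturbed data. The noise scale and the relative-objective-error helper are illustrative placeholders; neither variant is calibrated to a formal privacy level here, so the printout only shows the shape of the computation, not the paper's trade-off.

```python
# Sketch of the general mechanism: compress the regression data with a random
# projection, add Gaussian noise, and solve least squares on the perturbed
# data.  The noise scale `sigma` is an illustrative placeholder, not a level
# calibrated to any differential-privacy guarantee.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 20, 400            # records, features, projected dimension
sigma = 0.5                        # illustrative noise scale

X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = X @ beta_true + 0.1 * rng.normal(size=n)

def rel_objective_error(beta_hat):
    """Relative excess of ||X b - y||^2 over the non-private least-squares fit."""
    beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
    f_opt = np.sum((X @ beta_ls - y) ** 2)
    return (np.sum((X @ beta_hat - y) ** 2) - f_opt) / f_opt

# (a) additive noise only, applied directly to the full data
beta_noise, *_ = np.linalg.lstsq(X + sigma * rng.normal(size=X.shape),
                                 y + sigma * rng.normal(size=n), rcond=None)

# (b) random projection to k < n rows, then additive noise
W = rng.normal(size=(k, n)) / np.sqrt(k)
beta_proj, *_ = np.linalg.lstsq(W @ X + sigma * rng.normal(size=(k, d)),
                                W @ y + sigma * rng.normal(size=k), rcond=None)

print("relative objective error, noise only      :", rel_objective_error(beta_noise))
print("relative objective error, projection+noise:", rel_objective_error(beta_proj))
```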
Proceedings Article

An SMT-based approach to secure state estimation under sensor and actuator attacks

TL;DR: The notion of “sparse strong observability” is introduced to characterize systems for which the state estimation is possible, given bounds on the number of attacked sensors and actuators.
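
As a simplified illustration of the kind of condition involved, the sketch below checks a plain sparse observability property of an LTI pair (A, C) with a Kalman rank test over sensor subsets; the paper's sparse strong observability is stronger, since it also accounts for attacked actuators acting as unknown inputs. The system matrices and attack bound are made up.

```python
# Illustrative check of a *sparse observability* condition for an LTI pair
# (A, C): the state must remain observable after removing any 2*s sensor rows
# (s = assumed bound on attacked sensors).  This is a simplification of the
# paper's "sparse strong observability", which additionally treats attacked
# actuators as unknown inputs.
from itertools import combinations
import numpy as np

def observable(A, C_rows):
    """Kalman rank test for the pair (A, C_rows)."""
    n = A.shape[0]
    C = np.atleast_2d(C_rows)
    obs = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    return np.linalg.matrix_rank(obs) == n

def is_2s_sparse_observable(A, C, s):
    """True if (A, C) stays observable after deleting any 2*s rows of C."""
    p = C.shape[0]
    keep = p - 2 * s
    return all(observable(A, C[list(rows), :]) for rows in combinations(range(p), keep))

# Made-up example: 2-dimensional system with 4 sensors, at most s = 1 attacked.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
print(is_2s_sparse_observable(A, C, s=1))   # True for this example
```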
Proceedings Article

System identification in the presence of adversarial outputs

TL;DR: This work considers the problem of system identification of linear time-invariant systems when some of the sensor measurements are changed by a malicious adversary, and provides a precise characterization of the equivalence relation that identifies which models cannot be distinguished in the presence of attacks.
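
The paper's contribution is the characterization itself, but the underlying difficulty can be illustrated with a short sketch: least-squares identification of a first-order model recovers the true parameters from clean data and is pulled away from them when an adversary overwrites a few output samples. The model, numbers, and attack pattern below are made up for illustration.

```python
# Illustration of the underlying problem (not the paper's characterization):
# identify a first-order model y[t+1] = a*y[t] + b*u[t] by least squares when
# an adversary overwrites a few of the recorded outputs.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, T = 0.8, 1.0, 200

u = rng.normal(size=T)
y = np.zeros(T + 1)
for t in range(T):
    y[t + 1] = a_true * y[t] + b_true * u[t]

y_attacked = y.copy()
y_attacked[[50, 120, 180]] += 25.0      # adversary corrupts three output samples

def identify(y_meas):
    """Least-squares fit of (a, b) from measured outputs and known inputs."""
    Phi = np.column_stack([y_meas[:-1], u])   # regressors: [y[t], u[t]]
    theta, *_ = np.linalg.lstsq(Phi, y_meas[1:], rcond=None)
    return theta

print("clean data   :", identify(y))           # recovers (0.8, 1.0)
print("attacked data:", identify(y_attacked))  # biased estimate
```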