Author

Makoto Okazaki

Bio: Makoto Okazaki is an academic researcher from the University of Tokyo. The author has contributed to research on topics including absorption spectroscopy and extended X-ray absorption fine structure. The author has an h-index of 14 and has co-authored 27 publications receiving 4,914 citations.

Papers
Journal ArticleDOI
TL;DR: In this paper, the energy levels for ionized homopolar pairs of impurities in n-type and p-type silicon under magnetic and stress fields are calculated with the help of effective mass theoretical approaches.
Abstract: With the help of effective-mass theoretical approaches, energy levels are calculated for ionized homopolar pairs of impurities in n-type and p-type silicon under magnetic and stress fields. The two impurity ions which constitute the pair are assumed to lie along the (001) directions of the crystal. The microwave double-resonance experiment recently performed by Tanaka et al. is analyzed with the use of the obtained level schemes. Brief discussions of transition probabilities and relaxation times are added.
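As background for the microwave-resonance analysis mentioned above, the basic Zeeman relation h·ν = g·μB·B fixes the resonance frequency of a spin-1/2 level in a magnetic field. A minimal sketch (the g-factor and field value below are illustrative assumptions, not numbers from the paper):

```python
# Zeeman splitting and microwave resonance frequency for a spin-1/2 level.
# The g-factor and field strength are illustrative, not taken from the paper.
MU_B = 9.274e-24   # Bohr magneton, J/T
H = 6.626e-34      # Planck constant, J*s

def resonance_frequency_hz(g: float, b_tesla: float) -> float:
    """Frequency satisfying h*nu = g * mu_B * B (spin-1/2 Zeeman splitting)."""
    return g * MU_B * b_tesla / H

# A free-electron-like g of about 2 in a 0.3 T field resonates in the
# microwave band, consistent with a microwave resonance experiment:
print(f"{resonance_frequency_hz(2.0, 0.3) / 1e9:.1f} GHz")   # → 8.4 GHz
```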

4 citations

Journal ArticleDOI
TL;DR: In this article, the electronic density of states of amorphous Si is investigated with the use of the self-consistent cluster theory developed in the preceding paper, and the E–k relations and local densities of states are calculated for the 1-, 2-, and 8-atom cluster cases.
Abstract: The electronic density of states of amorphous Si is investigated with the use of the self-consistent cluster theory developed in the preceding paper. E–k relations and local densities of states are calculated for the 1-, 2-, and 8-atom cluster cases. The effect of local order is elucidated by comparing the results in these three cases. A density of states with a pseudogap at the Fermi energy is obtained for the 8-atom cluster case, in which the two central atoms are in a tetrahedral environment. By using the sp3 hybridized orbitals, the structure in the state density is analyzed, and the origin of the pseudogap is clarified to be the bonding-antibonding splitting of the hybridized orbitals.
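The bonding-antibonding mechanism behind the pseudogap can be illustrated with the simplest possible model: two identical orbitals of energy eps coupled by a hopping element t split into a bonding level (eps − t) and an antibonding level (eps + t), opening a gap of 2t. The numbers below are illustrative, not parameters from the paper:

```python
# Minimal two-level model of bonding/antibonding splitting.
# A pair of identical orbitals (energy eps) coupled by hopping t has the
# 2x2 Hamiltonian [[eps, -t], [-t, eps]], with eigenvalues eps - t and eps + t.
# eps and t are illustrative values, not taken from the paper.

def bonding_antibonding(eps: float, t: float) -> tuple[float, float]:
    """Return the (bonding, antibonding) eigenvalues of a symmetric dimer."""
    return eps - t, eps + t

bonding, antibonding = bonding_antibonding(eps=0.0, t=2.5)
print(bonding, antibonding)   # → -2.5 2.5, i.e. a gap of 2*t opens
```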

3 citations

Journal ArticleDOI
TL;DR: In this paper, a method for calculating the electronic density of states of a system with local order is presented; the local order is taken into account through the atomic configuration in a cluster, and the potential due to the surrounding medium is assumed to be an energy-dependent complex constant.
Abstract: A method for calculating the electronic density of states of a system with local order is presented. For the system, we set up a model of a cluster of atoms embedded in an effective medium. The local order is taken into account through the atomic configuration in the cluster. The potential due to the surrounding medium is assumed to be an energy-dependent complex constant. The value of the potential is determined self-consistently from the condition that the system gives rise to no averaged forward scattering. This condition provides the E–k relation of the system. An expression for the local density of states is derived.

3 citations

01 Jan 2009
TL;DR: In this paper, the authors propose an event notification system that monitors tweets (Twitter messages) and delivers semantically relevant tweets when they meet a user's information needs; the system can detect an earthquake by monitoring tweets before the seismic waves actually arrive.
Abstract: Twitter, a popular microblog service, has received much attention recently. An important characteristic of Twitter is its real-time nature. However, to date, integration of semantic processing with the real-time nature of Twitter has not been well studied. As described herein, we propose an event notification system that monitors tweets (Twitter messages) and delivers semantically relevant tweets if they meet a user's information needs. As an example, we construct an earthquake prediction system targeting Japanese tweets. Because of the numerous earthquakes in Japan and the vast number of Twitter users throughout the country, it is sometimes possible to detect an earthquake by monitoring tweets before the earthquake actually arrives. (An earthquake is transmitted through the earth's crust at about 3-7 km/s. Consequently, a person has about 20 s before its arrival at a point that is 100 km distant.) Other examples are detection of rainbows in the sky and detection of traffic jams in cities. We first prepare training data and apply a support vector machine to classify a tweet into positive and negative classes, which corresponds to the detection of a target event. Features for the classification are constructed using the keywords in a tweet, the number of words, the context of event words, and so on. In the evaluation, we demonstrate that every recent large earthquake has been detected by our system. Indeed, notification is delivered much faster than the announcements broadcast by the Japan Meteorological Agency.
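The lead-time arithmetic quoted in the abstract follows directly from the wave speed: taking roughly 5 km/s (the midpoint of the 3-7 km/s range above) as a representative crustal propagation speed, a point 100 km away has about 20 s of warning. A one-function sketch:

```python
def warning_time_s(distance_km: float, wave_speed_km_s: float = 5.0) -> float:
    """Seconds between the quake and the wave reaching a point distance_km away.

    The default speed is the midpoint of the 3-7 km/s range quoted in the
    abstract; real seismic phases (P vs. S waves) travel at different speeds.
    """
    return distance_km / wave_speed_km_s

print(warning_time_s(100.0))   # → 20.0, matching the ~20 s quoted above
```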

2 citations


Cited by
Proceedings ArticleDOI
28 Mar 2011
TL;DR: There are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.
Abstract: We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. In this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to "trending" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting ("re-tweeting") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items in a recent sample of Twitter postings. Our results show that there are measurable differences in the way messages propagate, which can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.
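The reported 70-80% figures are the standard precision and recall over the credible/not-credible labels. As a reminder of what those metrics measure, a minimal sketch (the confusion counts below are made-up, not the paper's results):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = tp/(tp+fp): fraction of predicted positives that are right.
    Recall = tp/(tp+fn): fraction of actual positives that were found."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical confusion counts for the "credible" class:
p, r = precision_recall(tp=150, fp=50, fn=50)
print(p, r)   # → 0.75 0.75, i.e. inside the 70-80% range reported above
```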

2,123 citations

Proceedings ArticleDOI
26 Oct 2010
TL;DR: A probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, which can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on.
Abstract: We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.
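The content-only idea can be caricatured as ranking cities by how strongly a user's words are associated with each one, i.e. a naive Bayes score over per-city word probabilities. This sketch uses a toy word-frequency table and is not the paper's actual model, which adds a local-word classifier and lattice-based neighborhood smoothing:

```python
import math

# Toy per-city word probabilities (made-up numbers, not the paper's data).
city_word_prob = {
    "Houston": {"rockets": 0.02, "rodeo": 0.01, "subway": 0.001},
    "New York": {"rockets": 0.002, "rodeo": 0.0005, "subway": 0.03},
}

def rank_cities(tweet_words, table, floor=1e-6):
    """Rank cities by sum of log P(word | city): a naive Bayes score with a
    uniform prior. Unseen words get a small floor probability."""
    scores = {
        city: sum(math.log(probs.get(w, floor)) for w in tweet_words)
        for city, probs in table.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank_cities(["subway", "subway", "rockets"], city_word_prob))
# → ['New York', 'Houston']: repeated local words dominate the estimate
```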

1,213 citations

Journal ArticleDOI
04 May 2011-PLOS ONE
TL;DR: The use of information embedded in the Twitter stream is examined to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity.
Abstract: Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's “tweets,” or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.
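"Accurately track" here is a correlation claim: a time series of flu-related tweet volumes is compared against reported influenza-like-illness levels. A minimal Pearson-correlation sketch over made-up weekly counts (not the paper's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up weekly flu-tweet counts vs. reported case counts:
tweets = [120, 150, 400, 900, 700, 300]
cases  = [10,  14,  35,  80,  66,  25]
print(round(pearson(tweets, cases), 3))   # a strong positive correlation
```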

1,195 citations

Book ChapterDOI
18 Apr 2011
TL;DR: This paper empirically compares the content of Twitter with a traditional news medium, the New York Times, using unsupervised topic modeling, and reports interesting and useful findings for downstream IR or DM applications.
Abstract: Twitter, as a new form of social media, can potentially contain much useful information, but content analysis on Twitter has not been well studied. In particular, it is not clear whether, as an information source, Twitter can simply be regarded as a faster news feed that covers mostly the same information as traditional news media. In this paper we empirically compare the content of Twitter with a traditional news medium, the New York Times, using unsupervised topic modeling. We use a Twitter-LDA model to discover topics from a representative sample of the whole of Twitter. We then use text mining techniques to compare these Twitter topics with topics from the New York Times, taking into consideration topic categories and types. We also study the relation between the proportions of opinionated tweets and retweets and topic categories and types. Our comparisons show interesting and useful findings for downstream IR or DM applications.

1,193 citations

Journal ArticleDOI
TL;DR: This article deconstructs the ideological grounds of datafication, an ideology rooted in problematic ontological and epistemological claims that shows characteristics of a widespread secular belief in the context of a larger social media logic.
Abstract: Metadata and data have become a regular currency for citizens to pay for their communication services and security—a trade-off that has nestled into the comfort zone of most people. This article deconstructs the ideological grounds of datafication. Datafication is rooted in problematic ontological and epistemological claims. As part of a larger social media logic, it shows characteristics of a widespread secular belief. Dataism, as this conviction is called, is so successful because masses of people — naively or unwittingly — trust their personal information to corporate platforms. The notion of trust becomes more problematic because people’s faith is extended to other public institutions (e.g. academic research and law enforcement) that handle their (meta)data. The interlocking of government, business, and academia in the adaptation of this ideology makes us want to look more critically at the entire ecosystem of connective media.

1,076 citations