Q2. Why is inconsistency an intrinsic characteristic in IoT data?
Due to the limited precision and accuracy of the sensing process, as well as problems such as packet loss during transmission, data inconsistency is an intrinsic characteristic of sensing data.
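As an illustration (not from the source), one common repair for packet loss is to interpolate readings over gaps in the packet sequence numbers; the data layout and function name below are assumptions.

```python
def interpolate_gaps(readings):
    """Fill missing sequence numbers by linear interpolation
    between the nearest received neighbours.
    `readings` maps packet sequence number -> sensed value."""
    seqs = sorted(readings)
    repaired = dict(readings)
    for lo, hi in zip(seqs, seqs[1:]):
        span = hi - lo
        for missing in range(lo + 1, hi):
            frac = (missing - lo) / span
            repaired[missing] = readings[lo] + frac * (readings[hi] - readings[lo])
    return repaired

# Packets 2 and 3 were dropped in transit; their values are estimated.
received = {1: 20.0, 4: 23.0, 5: 23.5}
print(interpolate_gaps(received))
```

This is only a sketch of one cleaning strategy; real systems must also decide when a gap reflects loss rather than a sensor that legitimately stopped reporting.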
Q3. Why is the data compression technique used to reduce space requirements?
Since raw RFID data contains a large amount of redundancy, RFID data compression is also applied to reduce space requirements after inference results have been obtained.
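A minimal sketch (my own, not from the source) of how such redundancy can be removed: consecutive identical (tag, location) reads are collapsed into a single stay interval, a run-length-style compression often applied to RFID read streams.

```python
def compress_reads(reads):
    """Collapse consecutive duplicate (tag, location) reads into
    (tag, location, first_seen, last_seen) stay intervals.
    `reads` is a time-ordered list of (timestamp, tag, location)."""
    intervals = []
    for ts, tag, loc in reads:
        if intervals and intervals[-1][0] == tag and intervals[-1][1] == loc:
            tag_, loc_, first, _ = intervals[-1]
            intervals[-1] = (tag_, loc_, first, ts)  # extend the interval
        else:
            intervals.append((tag, loc, ts, ts))     # start a new interval
    return intervals

reads = [(1, "T1", "dock"), (2, "T1", "dock"), (3, "T1", "dock"), (4, "T1", "gate")]
print(compress_reads(reads))  # → [('T1', 'dock', 1, 3), ('T1', 'gate', 4, 4)]
```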
Q4. What languages can be used to add semantics to the descriptions of Web resources?
In particular, languages such as Microformats and schema.org can be used to add semantics to the descriptions of Web resources (including things).
Q5. What is the key technical challenge of a NEEL optimizer?
The key technical challenge is to selectively shed work in order to eliminate the less important query results, thereby preserving the more useful query results defined by some utility function.
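To make the idea concrete, here is a hedged sketch (names and the utility function are illustrative, not the optimizer's actual design): under overload, only the highest-utility results are retained and the rest are shed.

```python
import heapq

def shed(results, capacity, utility):
    """Keep only the `capacity` highest-utility results; the rest
    are discarded (the shed work)."""
    return heapq.nlargest(capacity, results, key=utility)

# Illustrative utility: prefer higher-priority results under overload.
results = [{"id": 1, "priority": 3}, {"id": 2, "priority": 9}, {"id": 3, "priority": 5}]
kept = shed(results, capacity=2, utility=lambda r: r["priority"])
print([r["id"] for r in kept])  # → [2, 3]
```

The design point is that shedding decisions are driven by the utility function rather than by arrival order, so the most useful results survive.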
Q6. How many sensors have been deployed to provide a variety of services?
Around 20,000 sensors have been deployed to provide a variety of services, such as static environmental monitoring, mobile environmental monitoring, parks and gardens irrigation, outdoor parking area management, guidance to free parking lots and traffic intensity monitoring.
Q7. What can be done to manage resources in a city?
Static resources (e.g. fire stations, parking spots) and mobile resources (e.g. police cars, fire trucks) in a city can be managed effectively using IoT technologies.
Q8. What are the main types of queries that need to be considered?
In addition, new types of queries may also need to be considered, such as source selection queries for overcoming data incompleteness, and so on.
Q9. What is the main advantage of using background knowledge in CEP systems?
The usage of background knowledge about events and their relations to other concepts in the application domain can improve the expressiveness and flexibility of CEP systems.
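As a sketch of this idea (the ontology and event schema below are entirely illustrative), a small subclass hierarchy lets a rule written against a general concept also match events of its specializations:

```python
# Background knowledge: a toy child -> parent concept hierarchy.
ONTOLOGY = {
    "Ambulance": "EmergencyVehicle",
    "FireTruck": "EmergencyVehicle",
    "EmergencyVehicle": "Vehicle",
}

def is_a(event_type, concept):
    """Walk up the hierarchy to test subsumption."""
    while event_type is not None:
        if event_type == concept:
            return True
        event_type = ONTOLOGY.get(event_type)
    return False

def match(events, concept):
    """Select events whose type is (a subtype of) `concept`."""
    return [e for e in events if is_a(e["type"], concept)]

stream = [{"type": "Ambulance", "id": 1}, {"type": "Car", "id": 2}]
print(match(stream, "EmergencyVehicle"))  # matches the Ambulance event only
```

A rule over "EmergencyVehicle" thus needs no rewriting when a new subtype is added to the knowledge base, which is the flexibility gain the answer refers to.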
Q10. What are the three factors to consider when designing a distributed storage system?
There are three factors or requirements to be considered when designing a distributed storage system (Chen et al., 2014):
• Consistency: multiple copies of the same data must be kept identical, since server failures and parallel storage may cause inconsistency.
• Availability: the system should continue to serve read and write requests even when some servers fail.
• Partition tolerance: the system should keep operating when the network between replicas is partitioned.
Q11. What is the envisioned importance of database techniques in the future?
It is envisioned that database techniques would become increasingly important in the progress of sensor network applications and energy-efficient storage.
Q12. What is the purpose of the proposed similarity functions?
The proposed new similarity functions are more accurate than existing string-based similarity functions because they aggregate evidence from multiple documents and exploit web search engines to measure similarity.
Q13. How does the specialized engine speed up the querying performance?
It is experimentally demonstrated that specialized engines in the data warehouse, stream processing, text, and scientific database markets can speed up querying performance by 1-2 orders of magnitude using the column-store architecture.
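The core of the column-store advantage can be sketched as follows (the table and data are made up for illustration): an aggregate over one attribute only has to touch that attribute's array in the columnar layout, instead of visiting every full tuple.

```python
# Row-oriented layout: tuples of (id, sensor, temp).
rows = [(1, "sensor-a", 20.5), (2, "sensor-b", 21.0), (3, "sensor-a", 19.5)]

# Row store: every whole tuple is visited even though only `temp` is needed.
row_sum = sum(r[2] for r in rows)

# Column store: the same relation decomposed into per-attribute arrays.
columns = {
    "id": [1, 2, 3],
    "sensor": ["sensor-a", "sensor-b", "sensor-a"],
    "temp": [20.5, 21.0, 19.5],
}
col_sum = sum(columns["temp"])  # only the one relevant column is read

assert row_sum == col_sum == 61.0
```

In a real engine the win comes from I/O and cache behaviour (plus per-column compression), which this in-memory sketch can only hint at.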
Q14. What are the main challenges in processing high-rate data streams?
In other scenarios, such as stock market and network monitoring systems, there also exist challenges in processing high-rate data streams.
Q15. What is the main idea behind the concept of a probabilistic approach to infer from raw?
Tran et al. (2009) consider raw data streams from mobile RFID readers and employ a probabilistic approach to translate these streams into clean, rich event streams with location information.
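A hedged sketch in the spirit of such probabilistic cleaning (not the authors' actual algorithm; the window size and threshold are assumptions): a tag is reported present when the fraction of recent read cycles that saw it exceeds a threshold, so single dropped reads no longer flip presence off.

```python
from collections import deque

def clean_stream(raw, window=4, threshold=0.5):
    """raw: iterable of 1 (tag read this cycle) / 0 (read missed).
    Yields a smoothed presence decision per cycle."""
    recent = deque(maxlen=window)
    for seen in raw:
        recent.append(seen)
        # Report presence when enough of the recent cycles saw the tag.
        yield 1 if sum(recent) / len(recent) >= threshold else 0

# A dropped read in the middle no longer flips presence off.
print(list(clean_stream([1, 1, 0, 1, 1, 0, 0, 0])))  # → [1, 1, 1, 1, 1, 1, 1, 0]
```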
Q16. How can the authors minimize the requirements for data collection, transmission and storage?
Data collection, transmission and storage requirements can be minimized in order to utilize low-cost and low-power components, while sufficient measurement accuracy is still maintained.
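One standard way to realize this trade-off is send-on-delta reporting, sketched below as my own illustration (the function name and values are assumptions): a node transmits a sample only when it differs from the last transmitted value by more than `delta`, bounding the accuracy loss while cutting transmissions.

```python
def send_on_delta(samples, delta):
    """Return only the samples a node would actually transmit:
    those that deviate from the last sent value by more than `delta`."""
    sent = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > delta:
            sent.append(s)
            last = s
    return sent

samples = [20.0, 20.1, 20.05, 20.8, 20.9, 22.0]
print(send_on_delta(samples, delta=0.5))  # → [20.0, 20.8, 22.0]
```

Here the receiver's view is always within `delta` of the true reading, yet only half the samples are transmitted and stored.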
Q17. What is the problem with the data representing the world?
As pointed out by James et al. (2009), the problem is that the world is changing fast, the data representing the world resides on multiple networked computers/smart things, and existing database technologies cannot manage it.