What is the impact of domestic bias on airport passengers' satisfaction?

The impact of domestic bias on airport passengers' satisfaction has been studied in several research papers. One study found that positive perceptions of airport physical-environment attributes, such as facility functionality, layout accessibility, and cleanliness, significantly increased passengers' satisfaction and travel intention. Another study proposed a Robust Customer Satisfaction Index (RCSI) for domestic air journeys, which was less sensitive to outlier data and predicted overall passenger satisfaction from perceived quality and value. A methodology for measuring airport service quality (ASQ) identified critical service aspects that significantly influenced passenger satisfaction, including airport appearance, signposting, and availability of bus links. Research conducted in Thailand found that airport facilities, wayfinding, and security were the dominant dimensions influencing overall passenger satisfaction. Finally, a study at Shanghai Hongqiao International Airport found that the substantive and communicative staging of the airport servicescape significantly affected passengers' emotional and subsequent behavioral responses, with effects that varied by travel frequency.
What is automation bias?

Automation bias refers to the tendency of humans to rely too heavily on suggestions or decisions made by automated systems, such as artificial intelligence (AI) algorithms, and to trust them even when they may be incorrect or biased. This bias can impact decision-making in various fields, including mammography reading, government administration, and international security. In the context of mammography reading, inexperienced, moderately experienced, and very experienced radiologists were all found to be prone to automation bias when using an AI-based system for assistance. In government administration, the use of automated decision-making systems with a "human in the loop" can lead to automation bias, where decision-makers excessively trust computers and rely less on their own judgement. Similarly, in the international security realm, research suggests that humans can be overconfident in AI and may exhibit automation bias, especially when they have lower levels of experience with AI.
How can bias in AI systems be tackled?

To tackle bias in AI systems, researchers have proposed various approaches. One approach is to use synthetic data to mitigate bias: Fair-GAN, a technique proposed by Patrikar, uses Generative Adversarial Networks (GANs) to synthesize data that offsets bias in the original dataset. Another approach is the use of visual interactive tools such as D-BIAS, proposed by Ghai and Mueller, which allow users to detect and mitigate bias in tabular datasets by refining causal models and acting on unfair causal relationships. Additionally, Sinwar et al. emphasize the need for understanding bias in AI systems and recommend employing responsible AI models for decision-making processes. Primiero suggests considering information-quality dimensions such as completeness, consistency, timeliness, and reliability to improve bias-mitigation tools. These approaches aim to address bias in AI systems and promote fairness, accountability, trust, and interpretability.
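Beyond the GAN- and causal-model-based tools above, a common baseline for tabular data is reweighting: give each (group, label) combination a weight so that the protected attribute becomes statistically independent of the label in the weighted training set. The sketch below is a minimal illustration of that idea only; the function name `reweigh` and the toy data are ours, not from any of the cited works.

```python
# Minimal reweighting sketch: weight each example by
# P(group) * P(label) / P(group, label), so that in the weighted data
# the protected attribute and the label are independent.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per example (hypothetical helper for illustration)."""
    n = len(labels)
    p_group = Counter(groups)            # counts per protected group
    p_label = Counter(labels)            # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" is mostly labeled 1, group "b" mostly labeled 0.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweigh(groups, labels)
# Over-represented pairs like ("a", 1) get weights below 1,
# under-represented pairs like ("a", 0) get weights above 1.
```

These weights would then be passed to any learner that accepts per-sample weights, so the model no longer learns the spurious group-label association.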
What are some statistics surrounding bias in AI?

Bias in AI is a growing concern, with 81% of technology leaders expressing a desire for government regulation in this area. Detecting and mitigating bias in AI can be challenging, as it is more abstract and unintuitive than traditional forms of discrimination. Two metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), have been proposed to quantitatively evaluate class-wise bias in multi-class classifiers. AI-based systems have the potential to impact individuals and society, raising human-rights concerns. Studies have shown bias in face recognition systems, crime recidivism prediction tools, and natural language processing tools. The use of automated decision-making tools is more prevalent among lower socioeconomic classes, exacerbating bias. Researchers have proposed various approaches to mitigate bias in AI, but there is still room for advancement in this area.
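The exact definitions of CEV and SDE are not given here, but the underlying idea of class-wise bias metrics can be illustrated with something simpler: compute the error rate per class and look at the spread across classes. The sketch below is our own simplification, not the cited metrics.

```python
# Simplified class-wise bias check (not the CEV/SDE formulas themselves):
# a large variance in per-class error rates suggests the classifier
# treats some classes systematically worse than others.
from statistics import pvariance

def per_class_error_rates(y_true, y_pred):
    """Map each class to its error rate (illustrative helper)."""
    rates = {}
    for c in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == c]
        errors = sum(1 for i in idx if y_pred[i] != c)
        rates[c] = errors / len(idx)
    return rates

# Toy predictions for a 3-class problem.
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1, 2, 0]
rates = per_class_error_rates(y_true, y_pred)  # class 2 fares worst here
spread = pvariance(rates.values())             # 0 would mean uniform treatment
```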
What is bias in machine learning?

Bias in machine learning refers to the presence of systematic and directional statistical bias in the predictions made by machine learning models. This bias can lead to underprediction of certain target features, particularly for members of minority groups. The underprediction for minorities arises from statistical inference on small samples: ML models effectively perform inference on subsets of the training set that are similar to the new individual being classified, and these subsets are typically smaller for the minority group, resulting in biased predictions. A bias-prediction measure based on small-sample inference has been found to have a significant positive correlation with the observed underprediction rate in ML models. Additionally, bias in machine learning models can stem from biased training data that reflects the cognitive bias displayed by humans.
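The small-sample mechanism can be made concrete with a toy simulation (our own construction, not from the cited study): both groups have the same true positive rate, but the model estimates that rate from each group's own training examples and predicts "positive" only when the estimate reaches a threshold. The minority group's smaller sample makes its estimate noisier, so it falls below the threshold, i.e. is underpredicted, more often.

```python
# Toy illustration of underprediction from small-sample inference:
# identical true positive rate (0.6) for both groups, but the group with
# fewer training examples is wrongly predicted "negative" far more often.
import random

def underprediction_rate(sample_size, true_rate=0.6, threshold=0.5,
                         trials=5000, seed=0):
    """Fraction of trials where the sample estimate falls below threshold."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        positives = sum(rng.random() < true_rate for _ in range(sample_size))
        if positives / sample_size < threshold:
            misses += 1
    return misses / trials

majority = underprediction_rate(sample_size=200)  # large training subset
minority = underprediction_rate(sample_size=10)   # small training subset
# minority comes out noticeably higher, despite identical true rates.
```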
What is the theoretical concept of bias and variance in machine learning?

The theoretical concept of bias and variance in machine learning refers to the trade-off between underfitting and overfitting. Bias measures the error introduced by approximating a real-world problem with a simplified model, while variance measures the sensitivity of the model to fluctuations in the training data. High bias can lead to underfitting, where the model is too simple to capture the underlying patterns in the data. High variance, on the other hand, can lead to overfitting, where the model becomes too complex and fits the noise in the training data. The goal is to find the right balance between bias and variance to achieve good generalization performance.
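The trade-off can be measured directly, since expected squared error decomposes into bias² + variance + irreducible noise. The sketch below (our own illustration, with hypothetical helper names) repeatedly draws noisy training sets from f(x) = x², then estimates f at a test point with two extremes: the global mean of y (too simple, high bias) and the y-value of the single nearest training point (too flexible, high variance).

```python
# Empirical bias/variance at one test point x0, under the model y = x^2 + noise:
# the mean-of-y estimator underfits (high bias, low variance), while the
# 1-nearest-neighbour estimator overfits the noise (low bias, high variance).
import random

def simulate(estimator, x0=0.8, n=30, noise=0.3, runs=2000, seed=1):
    rng = random.Random(seed)
    preds = []
    for _ in range(runs):
        xs = [rng.uniform(0, 1) for _ in range(n)]
        ys = [x * x + rng.gauss(0, noise) for x in xs]
        preds.append(estimator(xs, ys, x0))
    mean_pred = sum(preds) / runs
    bias2 = (mean_pred - x0 * x0) ** 2                      # squared bias
    variance = sum((p - mean_pred) ** 2 for p in preds) / runs
    return bias2, variance

def mean_model(xs, ys, x0):          # ignores x entirely: underfits
    return sum(ys) / len(ys)

def nn_model(xs, ys, x0):            # copies the nearest noisy point: overfits
    return ys[min(range(len(xs)), key=lambda i: abs(xs[i] - x0))]

b2_mean, var_mean = simulate(mean_model)
b2_nn, var_nn = simulate(nn_model)
# b2_mean dominates for the simple model; var_nn dominates for the flexible one.
```

A model of intermediate flexibility (say, a local average over several neighbours) would trade some of the 1-NN model's variance for a little bias, which is exactly the balance the passage above describes.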