Feature location in source code: a taxonomy and survey
Citations
Software development in startup companies: A systematic mapping study
Improving bug localization using structured information retrieval
How to effectively use topic models for software engineering tasks? An approach based on genetic algorithms
A survey of code-based change impact analysis techniques
Automatic query reformulations for text retrieval in software engineering
Frequently Asked Questions (11)
Q2. What future work have the authors mentioned in the paper "Feature location in source code: a taxonomy and survey"?
The taxonomy facilitates the comparison of existing feature location techniques and illuminates possible areas of future research.
Q3. What was used to assess task difficulty?
The NASA Task Load Index (TLX) was used to assess task difficulty, and distance profiles were used to gauge the degree to which the participants remained on-task.
Q4. What is the way to evaluate a feature location approach?
Another way to evaluate a feature location approach is to have system experts or even non-experts assess the results, an evaluation method often used for IR-based search engines.
Q5. What are the approaches to establish the mapping between the description of a feature and the source code?
The approaches to establish the mapping between the description of the feature and the source code include textual search with grep [Petrenko'08], Information Retrieval [Cleary'09, Gay'09, Marcus'04, Poshyvanyk'07b], and natural language processing [Hill'09, Shepherd'07].
Q6. What is the importance of feature location in software maintenance?
Because software maintenance is performed in the context of incremental change, no maintenance activity can be completed without first locating the code that is relevant to the task at hand, which makes feature location essential.
Q7. Why is the granularity of the input program elements more fine grained than other?
Because the granularity of the input program elements is finer-grained (i.e., variables), the results are also finer-grained than those of other FLTs.
Q8. Why are there no papers that fall into the categories survey and experiment?
Because the feature location field is not as mature as other software engineering fields, there are no papers that fall into the survey and experiment categories.
Q9. What did the authors do to improve their taxonomy and attribute set?
Through this process the authors were able to improve the quality of their taxonomy and attribute set, as well as their descriptions.
Q10. What types of datasets could be used to evaluate a feature?
These datasets could contain a list of features, textual descriptions or documentation of the features, mappings between features or bugs and the program elements relevant to fixing the bug or implementing the feature (referred to as gold sets in the literature), patches submitted to an issue tracker, etc.
Q11. How many functions did the developer investigate to partially comprehend the system?
The results show that among the 984 functions in Mosaic, the developer performing concept location on a maintenance task was able to partially comprehend the system by investigating only 22 (2%) of the functions.