
Showing papers on "Static program analysis published in 2012"


Patent
29 Mar 2012
TL;DR: In this article, a computer-implemented method for combining static and dynamic code analysis is described: a static analysis identifies objects through which executable code may leak sensitive data, and a dynamic analysis, tuned with the static result, tracks those objects while the code executes.
Abstract: A computer-implemented method for combining static and dynamic code analysis may include 1) identifying executable code that is to be analyzed to determine whether the executable code is capable of leaking sensitive data, 2) performing a static analysis of the executable code to identify one or more objects which the executable code may use to transfer sensitive data, the static analysis being performed by analyzing the executable code without executing the executable code, 3) using a result of the static analysis to tune a dynamic analysis to track the one or more objects identified during the static analysis, and 4) performing the dynamic analysis by, while the executable code is being executed, tracking the one or more objects identified during the static analysis to determine whether the executable code leaks sensitive data via the one or more objects. Various other methods, systems, and computer-readable media are also disclosed.

176 citations
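
To illustrate the two-phase idea, the sketch below (in Python, purely illustrative and not the patented method) shows a static pass that flags potential data-transfer objects without executing the code, and a runtime proxy that then tracks only what the static pass flagged. The sink names and the proxy interface are assumptions made for the example.

    import ast

    # Hypothetical method names through which data could leave the program.
    SENSITIVE_SINKS = {"send", "write", "post"}

    def static_pass(source):
        """Phase 1: find calls that may transfer data, without executing anything."""
        sinks = set()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
                if node.func.attr in SENSITIVE_SINKS:
                    sinks.add(node.func.attr)
        return sinks

    class TrackingProxy:
        """Phase 2: wrap an object and log data passed to the flagged methods."""
        def __init__(self, target, tracked, log):
            self._target, self._tracked, self._log = target, tracked, log
        def __getattr__(self, name):
            attr = getattr(self._target, name)
            if name in self._tracked and callable(attr):
                def probe(*args, **kwargs):
                    self._log.append((name, args))  # evidence of a potential leak
                    return attr(*args, **kwargs)
                return probe
            return attr

Tuning the dynamic analysis with the static result, as the claim describes, then amounts to wrapping only those objects whose methods appear in static_pass(source).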


Journal ArticleDOI
TL;DR: An overview of the approach developed by the Software Improvement Group for code analysis and quality consulting focused on software maintainability is provided, which uses a standardized measurement model based on the ISO/IEC 9126 definition of maintainability and source code metrics.
Abstract: We provide an overview of the approach developed by the Software Improvement Group for code analysis and quality consulting focused on software maintainability. The approach uses a standardized measurement model based on the ISO/IEC 9126 definition of maintainability and source code metrics. Procedural standardization in evaluation projects further enhances the comparability of results. Individual assessments are stored in a repository that allows any system at hand to be compared to the industry-wide state of the art in code quality and maintainability. When a minimum level of software maintainability is reached, the certification body of TÜV Informationstechnik GmbH issues a Trusted Product Maintainability certificate for the software product.

168 citations
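
The measurement model itself is calibrated against SIG's benchmark repository, but its general flavor can be sketched: raw source code metrics are mapped onto ratings via thresholds and then aggregated into a system-level rating. The thresholds and the metric selection below are invented for illustration, not SIG's calibrated values.

    def rate(value, thresholds):
        """Map a raw metric (lower is better) onto a 5..1 star rating."""
        stars = 5
        for limit in thresholds:
            if value <= limit:
                return stars
            stars -= 1
        return max(stars, 1)

    def maintainability(metrics):
        """Aggregate per-metric ratings into a system-level rating."""
        ratings = {
            "unit_size":   rate(metrics["avg_unit_loc"], (15, 30, 60, 100)),
            "complexity":  rate(metrics["avg_cyclomatic"], (5, 10, 20, 50)),
            "duplication": rate(metrics["duplication_pct"], (3, 5, 10, 20)),
        }
        return sum(ratings.values()) / len(ratings), ratings

    score, detail = maintainability(
        {"avg_unit_loc": 25, "avg_cyclomatic": 7, "duplication_pct": 4})
    # score -> 4.0 on this toy scale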


Proceedings ArticleDOI
02 Jun 2012
TL;DR: GraPacc is introduced, a graph-based, pattern-oriented, context-sensitive code completion approach that is based on a database of API usage patterns and has a high level of accuracy in code completion.
Abstract: Code completion helps improve developers' programming productivity. However, the current support for code completion is limited to context-free code templates or a single method call of the variable on focus. Using software libraries for development, developers often repeat API usages for certain tasks. Thus, a code completion tool could make use of API usage patterns. In this paper, we introduce GraPacc, a graph-based, pattern-oriented, context-sensitive code completion approach that is based on a database of such patterns. GraPacc represents and manages the API usage patterns of multiple variables, methods, and control structures via graph-based models. It extracts the context-sensitive features from the code under editing, e.g. the API elements on focus and their relations to other code elements. Those features are used to search for and rank the patterns that best fit the current code. When a pattern is selected, the current code will be completed via a novel graph-based code completion algorithm. Empirical evaluation on several real-world systems shows that GraPacc has a high level of accuracy in code completion.

143 citations
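
A rough sketch of the ranking step, assuming patterns are stored as sets of API elements with usage frequencies. GraPacc actually uses richer graph-based models; this reduction to sets only shows the context-matching idea, and the API names below are invented.

    def extract_features(editing_tokens, api_prefix="javax.swing."):
        """Context-sensitive features: API elements visible at the editing point."""
        return {t for t in editing_tokens if t.startswith(api_prefix)}

    def rank_patterns(features, pattern_db):
        """Score stored usage patterns by overlap with the current context."""
        scored = [(len(features & p["apis"]) * p["frequency"], p)
                  for p in pattern_db]
        return [p for s, p in sorted(scored, key=lambda x: -x[0]) if s > 0]

    db = [{"apis": {"javax.swing.JButton", "javax.swing.JFrame"}, "frequency": 40},
          {"apis": {"javax.swing.JTable"}, "frequency": 90}]
    ctx = extract_features(["javax.swing.JFrame", "setSize"])
    best = rank_patterns(ctx, db)[0]  # the JFrame/JButton pattern wins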


Proceedings ArticleDOI
27 Mar 2012
TL;DR: The results of this study indicate that engineers are aware of code smells, but are not very concerned with their impact, given the low refactoring activity.
Abstract: An anti-pattern is a commonly occurring solution to a recurring problem that will typically negatively impact code quality. Code smells are considered to be symptoms of anti-patterns and occur at source code level. The lifespan of code smells in a software system can be determined by mining the software repository on which the system is stored. This provides insight into the behaviour of software developers with regard to resolving code smells and anti-patterns. In a case study, we investigate the lifespan of code smells and the refactoring behaviour of developers in seven open source systems. The results of this study indicate that engineers are aware of code smells, but are not very concerned with their impact, given the low refactoring activity.

131 citations


Proceedings ArticleDOI
15 Jul 2012
TL;DR: This work presents an approach to soundly and automatically transform many common uses of eval into other language constructs to enable sound static analysis of web applications and expands the applicability of static analysis for JavaScript web applications in general.
Abstract: A range of static analysis tools and techniques have been developed in recent years with the aim of helping JavaScript web application programmers produce code that is more robust, safe, and efficient. However, as shown in a previous large-scale study, many web applications use the JavaScript eval function to dynamically construct code from text strings in ways that obstruct existing static analyses. As a consequence, the analyses either fail to reason about the web applications or produce unsound or useless results. We present an approach to soundly and automatically transform many common uses of eval into other language constructs to enable sound static analysis of web applications. By eliminating calls to eval, we expand the applicability of static analysis for JavaScript web applications in general. The transformation we propose works by incorporating a refactoring technique into a dataflow analyzer. We report on our experimental results with a small collection of programming patterns extracted from popular web sites. Although there are inevitably cases where the transformation must give up, our technique succeeds in eliminating many nontrivial occurrences of eval.

110 citations
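
One of the simplest patterns such a transformation targets can be shown concretely: an eval that merely computes a property access can be rewritten to bracket notation. The regex-based rewriter below (Python operating on JavaScript text) is a toy; the paper's actual transformation is driven by a dataflow analyzer, not by pattern matching.

    import re

    # eval("base." + expr)  ->  base[expr]
    PROP_EVAL = re.compile(
        r'eval\(\s*"([A-Za-z_$][\w$.]*)\."\s*\+\s*([A-Za-z_$][\w$]*)\s*\)')

    def transform(js_source):
        """Rewrite the computed-property idiom; leave all other evals alone."""
        return PROP_EVAL.sub(r"\1[\2]", js_source)

    print(transform('var v = eval("form.elements." + name);'))
    # -> var v = form.elements[name];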


Proceedings ArticleDOI
27 Mar 2012
TL;DR: This paper studied the relationship between code anomalies and architecture problems in 6 software systems, which were intended to adhere to different architectural decompositions, and found that the refactoring strategies, even when frequently applied in those systems, did not significantly contribute to removing architecturally-relevant code anomalies.
Abstract: The longevity of evolving software systems largely depends on their resilience to architectural design degradation. It is often assumed that code anomalies are always key indicators of architecture degradation symptoms. The problem is that there is still limited knowledge about the circumstances under which code anomalies represent architectural problems. Without this knowledge, developers are not able to implement architecturally-relevant strategies for code refactoring. This paper presents an empirical study about the influence of code anomalies on architecture degradation symptoms. To this end, we studied the relationship between code anomalies and architecture problems in 6 software systems, which were intended to adhere to different architectural decompositions. A total of 40 versions and 2056 code anomalies were analyzed. Our study revealed that 78% of all architecture problems in the programs were related to code anomalies. In particular, more than 33% of all architecture problems were unexpectedly induced by anomalous code elements in the systems' first version. We also found that the refactoring strategies, even when frequently applied in those systems, did not significantly contribute to removing architecturally-relevant code anomalies.

100 citations


Proceedings ArticleDOI
27 Mar 2012
TL;DR: This paper proposes a three-step approach to feature identification from source code, of which the first two steps are automated.
Abstract: In order to migrate software products which are deemed similar into a product line, it is essential to identify the common features and the variations between the product variants. This can however be tedious and error-prone as it may involve browsing complex software and a lot of more or less similar variants. Fortunately, if artefacts of the product variants (source code files and/or models) are available, feature identification can be at least partially automated. In this paper, we thus propose a three-step approach to feature identification from source code of which the first two steps are automated.

95 citations
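
The two automated steps can be sketched as set operations over the variants' artefacts, assuming each variant is summarized as a set of code units (files, methods); the third, manual step then maps the resulting candidate sets onto named features. This is a deliberately simplified illustration, not the paper's algorithm.

    def identify_features(variants):
        """variants: dict of product name -> set of code units.
        Common features are shared by all; distinguishing candidates are not."""
        common = set.intersection(*variants.values())
        distinguishing = {name: units - common for name, units in variants.items()}
        return common, distinguishing

    variants = {
        "ProductA": {"parse()", "render()", "export_pdf()"},
        "ProductB": {"parse()", "render()", "export_html()"},
    }
    common, diff = identify_features(variants)
    # common -> {'parse()', 'render()'}; diff['ProductA'] -> {'export_pdf()'}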


Proceedings ArticleDOI
02 Jun 2012
TL;DR: This paper proposes a simple and general technique to automatically infer semantically related words in software by leveraging the context of words in comments and code and achieves a reasonable accuracy in seven large and popular code bases written in C and Java.
Abstract: Code search is an integral part of software development and program comprehension. The difficulty of code search lies in the inability to guess the exact words used in the code. Therefore, it is crucial for keyword-based code search to expand queries with semantically related words, e.g., synonyms and abbreviations, to increase the search effectiveness. However, relying on resources such as English dictionaries and WordNet to obtain semantically related words is of limited use in software, because many words that are semantically related in software are not semantically related in English. This paper proposes a simple and general technique to automatically infer semantically related words in software by leveraging the context of words in comments and code. We achieve a reasonable accuracy in seven large and popular code bases written in C and Java. Our further evaluation against the state of the art shows that our technique can achieve higher precision and recall.

94 citations
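
The core intuition, that words are related when they occur in similar contexts, admits a compact sketch. The window size and tokenization below are arbitrary choices for illustration, not the paper's.

    from collections import Counter
    from math import sqrt

    def context_vectors(token_lines, window=2):
        """For each word, count the words surrounding it in code and comments."""
        vectors = {}
        for tokens in token_lines:
            for i, w in enumerate(tokens):
                ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                vectors.setdefault(w, Counter()).update(ctx)
        return vectors

    def similarity(a, b):
        """Cosine similarity of two context vectors."""
        dot = sum(c * b[k] for k, c in a.items() if k in b)
        na = sqrt(sum(c * c for c in a.values()))
        nb = sqrt(sum(c * c for c in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    vecs = context_vectors([["send", "msg", "to", "server"],
                            ["send", "message", "to", "server"]])
    print(similarity(vecs["msg"], vecs["message"]))  # high: shared contexts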


Proceedings ArticleDOI
02 Sep 2012
TL;DR: A robust approach for extracting implementation variability from the Linux build system that works for all versions and architectures from the (git-)history of Linux.
Abstract: With more than 11,000 optional and alternative features, the Linux kernel is a highly configurable piece of software. Linux is generally perceived as a textbook example for preprocessor-based product derivation, but more than 65 percent of all features are actually handled by the build system. Hence, variability-aware static analysis tools have to take the build system into account. However, extracting variability information from the build system is difficult due to the declarative and Turing-complete make language. Existing approaches based on text processing do not cover these challenges and tend to be tailored to a specific Linux version and architecture. This renders them practically unusable as a basis for variability-aware tool support -- Linux is a moving target! We describe a robust approach for extracting implementation variability from the Linux build system. Instead of extracting the variability information by a text-based analysis of all build scripts, our approach exploits the build system itself to produce this information. As our results show, our approach is robust and works for all versions and architectures from the (git-)history of Linux.

88 citations
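
The paper's key move, asking the build system itself rather than parsing its scripts, can be approximated as follows. This sketch uses make's dry-run mode to list the objects a configuration selects and diffs two runs; passing options through the environment is a simplification for illustration (Linux actually consumes a .config file), and the real approach hooks Kbuild far more robustly.

    import os
    import subprocess

    def built_objects(srcdir, config):
        """Dry-run the build system (-n: print commands, build nothing)
        and collect the object files it would produce."""
        out = subprocess.run(["make", "-n", "-C", srcdir],
                             env={**os.environ, **config},
                             capture_output=True, text=True).stdout
        return {tok for tok in out.split() if tok.endswith(".o")}

    def option_variability(srcdir, base, option):
        """Files contributed by `option` = present with it on, absent with it off."""
        off = built_objects(srcdir, {**base, option: "n"})
        on = built_objects(srcdir, {**base, option: "y"})
        return on - off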


Proceedings ArticleDOI
26 Mar 2012
TL;DR: New optimized and adaptive usages of program slicing are presented, together with the underlying theoretical results and the algorithm these usages rely on; the method with program slicing outperforms previous combinations of static and dynamic analysis.
Abstract: Recent research proposed efficient methods for software verification combining static and dynamic analysis, where static analysis reports possible runtime errors (some of which may be false alarms) and test generation confirms or rejects them. However, test generation may time out on real-sized programs before confirming some alarms as real bugs or rejecting some others as unreachable. To overcome this problem, we propose to reduce the source code by program slicing before test generation. This paper presents new optimized and adaptive usages of program slicing, and provides the underlying theoretical results and the algorithm these usages rely on. The method is implemented in a tool prototype called SANTE (Static ANalysis and TEsting). Our experiments show that our method with program slicing outperforms previous combinations of static and dynamic analysis. Moreover, simplifying the program makes it easier to analyze detected errors and remaining alarms.

62 citations
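
Reducing the program with respect to one alarm is, at its core, a backward reachability walk over the dependence graph; everything outside the slice is irrelevant to confirming that alarm. A minimal sketch, with dependence edges given as plain dicts (the edges below are hypothetical).

    def backward_slice(dependences, criterion):
        """dependences: statement -> statements it data/control-depends on.
        Returns every statement that can influence the alarm `criterion`."""
        keep, worklist = set(), [criterion]
        while worklist:
            stmt = worklist.pop()
            if stmt not in keep:
                keep.add(stmt)
                worklist.extend(dependences.get(stmt, ()))
        return keep

    deps = {9: [7, 3], 7: [2], 3: [1], 8: [6]}  # hypothetical edges, alarm at 9
    print(sorted(backward_slice(deps, 9)))      # -> [1, 2, 3, 7, 9]; 6, 8 dropped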


01 Jan 2012
TL;DR: This paper proposes detecting and removing energy-wasteful code using software reengineering services, like code analysis and restructuring, to optimize the energy consumption of mobile devices.
Abstract: Due to the increasing consumer adoption of mobile devices, like smart phones and tablet PCs, saving energy is becoming more and more important. Users desire more functionality and longer battery cycles. While modern mobile computing devices offer hardware optimized for low energy consumption, applications often do not make proper use of energy-saving capabilities. This paper proposes detecting and removing energy-wasteful code using software reengineering services, like code analysis and restructuring, to optimize the energy consumption of mobile devices.

Proceedings ArticleDOI
15 Oct 2012
TL;DR: This paper lifts an existing analysis implemented in the Jakstab static analyzer to an additional dimension of location, to become sensitive to the value of the virtual program counter, and presents preliminary results for processing a virtualization-obfuscated binary.
Abstract: Virtualization-obfuscation protects a program from manual or automated analysis by compiling it into byte code for a randomized virtual architecture and attaching a corresponding interpreter. Static analysis appears to be helpless on such programs, where only the code of the interpreter is directly visible. In this paper, we explain the particular challenges for statically analyzing the combination of interpreter and byte code. Static analysis for computing possible variable values is commonly precise only to the program location. In the interpreter loop, however, this combines unrelated data flow information from different locations of the byte code program. To avoid this loss of information, we show how to lift an existing static analysis to an additional dimension of location, to become sensitive to the value of the virtual program counter. Thus, the static analysis merges data flow from equal byte code locations only. We lift an existing analysis implemented in the Jakstab static analyzer and present preliminary results for processing a virtualization-obfuscated binary.
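
The lifting can be pictured as changing the key of the abstract-state map from the native program counter alone to the pair (native PC, virtual PC), so that flows from different byte-code locations are never joined. A schematic worklist loop, where step() is an assumed transfer function over the interpreter's CFG and environments are plain dicts; this is an illustration of the idea, not Jakstab's implementation.

    def vpc_sensitive_analysis(step, entry_pc, entry_vpc, entry_env):
        """Abstract interpretation keyed by (native_pc, virtual_pc)."""
        def join(a, b):
            """Pointwise join of abstract environments; None acts as bottom."""
            if a is None:
                return dict(b)
            return {v: (a.get(v) if a.get(v) == b.get(v) else "TOP")
                    for v in set(a) | set(b)}

        states = {(entry_pc, entry_vpc): entry_env}
        worklist = [(entry_pc, entry_vpc)]
        while worklist:
            pc, vpc = worklist.pop()
            for pc2, vpc2, env2 in step(pc, vpc, states[(pc, vpc)]):
                merged = join(states.get((pc2, vpc2)), env2)
                if merged != states.get((pc2, vpc2)):
                    states[(pc2, vpc2)] = merged
                    worklist.append((pc2, vpc2))
        return states

A vPC-insensitive analysis would key states by pc alone and join the environments of all byte-code instructions dispatched through the same interpreter location, which is exactly the information loss the paper avoids.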

Proceedings ArticleDOI
02 Jun 2012
TL;DR: SYMake, as presented in this paper, provides a symbolic evaluation algorithm that processes Makefiles and produces a symbolic dependency graph (SDG), which represents the build dependencies (i.e. rules) among files via commands.
Abstract: The build process is crucial in software development. However, the analysis support for build code is still limited. In this paper, we present SYMake, an infrastructure and tool for the analysis of build code in make. Due to the dynamic nature of the make language, it is challenging to understand and maintain complex Makefiles. SYMake provides a symbolic evaluation algorithm that processes Makefiles and produces a symbolic dependency graph (SDG), which represents the build dependencies (i.e. rules) among files via commands. During the symbolic evaluation, for each resulting string value in an SDG that represents part of a file name or a command in a rule, SYMake also provides an acyclic graph (called a T-model) to represent its symbolic evaluation trace. We have used SYMake to develop algorithms and a tool 1) to detect several types of code smells and errors in Makefiles, and 2) to support build code refactoring, e.g. renaming a variable/target even if its name is fragmented and built from multiple substrings. Our empirical evaluation of SYMake's renaming on several real-world systems showed its high accuracy in entity renaming. Our controlled experiment showed that with SYMake, developers were able to understand Makefiles better, to detect more code smells, and to perform refactoring more accurately.
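
For straight-line Makefiles without variables, the SDG degenerates into an ordinary rule graph, which a few lines can already build. SYMake's contribution is doing this symbolically, tracking every string fragment's provenance via T-models, which this sketch omits entirely; it also ignores variables, includes, and other make features.

    import re

    RULE = re.compile(r"^([^\s:=]+)\s*:(?!=)\s*(.*)$")  # skips := assignments

    def build_sdg(makefile_text):
        """Nodes are files; each target records its prerequisites and recipe."""
        sdg, target = {}, None
        for line in makefile_text.splitlines():
            if line.startswith("\t") and target:
                sdg[target]["commands"].append(line.strip())
            else:
                m = RULE.match(line)
                if m:
                    target = m.group(1)
                    sdg[target] = {"deps": m.group(2).split(), "commands": []}
        return sdg

    mk = "app: main.o util.o\n\tcc -o app main.o util.o\nmain.o: main.c\n\tcc -c main.c\n"
    print(build_sdg(mk)["app"])  # deps ['main.o', 'util.o'], one command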

Proceedings ArticleDOI
08 May 2012
TL;DR: This work proposes a method for assessing the accuracy of binary-level fault injection, and provides an extensive experimental evaluation of abinary-level technique, G-SWFIT, in order to assess its limitations in a real-world complex software system.
Abstract: The injection of software faults (i.e., bugs) by mutating the binary executable code of a program enables the experimental dependability evaluation of systems for which the source code is not available. This approach requires that programming constructs used in the source code should be identified by looking only at the binary code, since the injection is performed at this level. Unfortunately, it is a difficult task to inject faults in the binary code that correctly emulate software defects in the source code. The accuracy of binary-level software fault injection techniques is therefore a major concern for their adoption in real-world scenarios. In this work, we propose a method for assessing the accuracy of binary-level fault injection, and provide an extensive experimental evaluation of a binary-level technique, G-SWFIT, in order to assess its limitations in a real-world complex software system. We injected more than 12 thousand binary-level faults in the OS and application code of the system, and we compared them with faults injected in the source code by using the same fault types of G-SWFIT. The method was effective at highlighting the pitfalls that can occur in the implementation of G-SWFIT. Our analysis shows that G-SWFIT can achieve an improved degree of accuracy if these pitfalls are avoided.

Proceedings Article
05 Nov 2012
TL;DR: This work describes an approach to partitioning a software application (particularly a client-facing web application) into components that can be run in the public cloud and components that should remain in the private data center.
Abstract: On-demand access to computing resources as-a-service has the potential to allow enterprises to temporarily scale out of their private data center into the infrastructure of a public cloud provider during times of peak demand. However, concerns about privacy and security may limit the adoption of this technique. We describe an approach to partitioning a software application (particularly a client-facing web application) into components that can be run in the public cloud and components that should remain in the private data center. Static code analysis is used to automatically establish a partitioning based on low-effort input from the developer. Public and private versions of the application are created and deployed; at runtime, user navigation proceeds seamlessly with requests routed to the public or private data center as appropriate. We present implementations for both Java and PHP web applications, tested on sample applications.
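
The placement step can be viewed as constraint propagation over a dependency graph. The toy constraint model below pins any component that reads private data to the private side and propagates that along data-flow edges; the paper derives its actual constraints from low-effort developer input plus static analysis, so this is only the flavor of the idea.

    def partition(data_flow, private_seeds):
        """data_flow: component -> components whose data it reads.
        One toy constraint: whoever reads private data becomes private."""
        private = set(private_seeds)
        changed = True
        while changed:
            changed = False
            for comp, reads in data_flow.items():
                if comp not in private and private & set(reads):
                    private.add(comp)
                    changed = True
        return set(data_flow) - private, private

    flows = {"ui": ["catalog"], "catalog": [], "checkout": ["payment"], "payment": []}
    public, private = partition(flows, {"payment"})
    # public: {'ui', 'catalog'}; private: {'payment', 'checkout'}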

Proceedings ArticleDOI
01 Sep 2012
TL;DR: The paper discusses the opportunities static code analysis can offer for PLC programming, reviews techniques for static analysis, and describes the tool that implements a rule-based analysis approach for IEC 61131-3 programs.
Abstract: Static code analysis techniques analyze programs by examining the source code without actually executing them. The main benefits lie in improving software quality by detecting potential defects and problematic code constructs in early development stages. Today, static code analysis is widely used and numerous tools are available for established programming languages like C/C++, Java, C# and others. However, in the domain of PLC programming, static code analysis tools are still rare. In this paper we present an approach and tool support for static code analysis of PLC programs. The paper discusses the opportunities static code analysis can offer for PLC programming, reviews techniques for static analysis, and describes our tool, which implements a rule-based analysis approach for IEC 61131-3 programs.
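
A rule-based checker in this style is essentially a catalog of patterns applied to the Structured Text source. The two rules below are invented examples of the kind of constructs such tools flag (hard-coded I/O addresses, leftover TODO comments), not the paper's actual rule set.

    import re

    RULES = [
        ("hard-coded I/O address", re.compile(r"%[IQM][XBWD]?\d+(\.\d+)?")),
        ("TODO left in comment",   re.compile(r"\(\*.*TODO.*\*\)")),
    ]

    def check(st_source):
        """Report (line number, rule, offending line) for each violation."""
        findings = []
        for lineno, line in enumerate(st_source.splitlines(), 1):
            findings += [(lineno, name, line.strip())
                         for name, pat in RULES if pat.search(line)]
        return findings

    program = "valve := %QX0.1;\n(* TODO handle alarm *)\nmotor := TRUE;"
    print(check(program))  # flags lines 1 and 2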

Proceedings ArticleDOI
23 Sep 2012
TL;DR: SCOOP is a tool that includes architecture-code traces in the analysis of the source code, and exploits relationships between multiple occurrences of code anomalies to detect the architecturally-relevant ones.
Abstract: Code anomalies are likely to be critical to the systems' maintainability when they are related to architectural problems. Many tools have been developed to support the identification of code anomalies. However, those tools are restricted to analyzing only the source code structure and identifying individual anomaly occurrences. These limitations are the main reasons why state-of-the-art tools are often unable to identify architecturally-relevant code anomalies, i.e. those related to architectural problems. To overcome these shortcomings we propose SCOOP, a tool that (i) includes architecture-code traces in the analysis of the source code, and (ii) exploits relationships between multiple occurrences of code anomalies to detect the architecturally-relevant ones. Our preliminary evaluation indicated that SCOOP was able to detect anomalous code elements related to 293 out of 368 architectural problems found in 3 software systems.

Proceedings ArticleDOI
03 Sep 2012
TL;DR: JStereoCode, a tool that automatically identifies the stereotypes of methods and classes in Java systems, is presented; for a given Java project, it classifies each method and class based on their stereotypes.
Abstract: Object-Oriented (OO) code stereotypes are low-level patterns that reveal the design intent of a source code artifact, such as a method or a class. They are orthogonal to the problem domain of the software and reflect the role of a method or class from the OO problem-solving point of view. However, the research community in automated reverse engineering has focused more on higher-level design information, such as design patterns. Existing work on reverse engineering code stereotypes is scarce and focused on C++ code, and no tools are freely available as of today. We present JStereoCode, a tool that automatically identifies the stereotypes of methods and classes in Java systems. The tool is integrated with Eclipse and, for a given Java project, will classify each method and class in the system based on their stereotypes. Applications of JStereoCode include program comprehension, defect prediction, and more.
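
Stereotype classification is driven by simple structural rules; a reduced sketch over a method summary shows the flavor. The categories follow the common get/set/predicate/command taxonomy, and the method record is an assumed interface; JStereoCode's real rules also analyze method bodies and field accesses.

    def stereotype(method):
        """method: dict with 'name', 'returns', 'params', 'writes_state'."""
        name, ret = method["name"], method["returns"]
        if ret == "boolean" and not method["writes_state"]:
            return "predicate"
        if name.startswith("get") and ret != "void" and not method["params"]:
            return "accessor"
        if name.startswith("set") and ret == "void" and len(method["params"]) == 1:
            return "mutator"
        if ret == "void" and method["writes_state"]:
            return "command"
        return "unclassified"

    print(stereotype({"name": "getName", "returns": "String",
                      "params": [], "writes_state": False}))  # -> accessor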

Proceedings ArticleDOI
03 Sep 2012
TL;DR: This paper suggests two heuristics for improving the accuracy of existing feature location techniques when locating distinguishing features - those that are present in one product variant while absent in another - by investigating code regions that have a high potential to implement a feature of interest.
Abstract: In this paper, we focus on the problem of feature location for families of related software products realized via code cloning. Locating code that corresponds to features in such families is an important task in many software development activities, such as support for sharing features between different products of the family or refactoring the code into product line representations that eliminate duplications and facilitate reuse. We suggest two heuristics for improving the accuracy of existing feature location techniques when locating distinguishing features – those that are present in one product variant while absent in another. Our heuristics are based on identifying code regions that have a high potential to implement a feature of interest. We refer to these regions as diff sets and compute them by comparing product variants to each other. We exemplify our approach on a small but realistic example and describe initial evaluation results.

Proceedings ArticleDOI
03 Sep 2012
TL;DR: This work introduces WebScent, a tool to detect embedded code smells, a type of code smell that violates important principles in software development such as software modularity and separation of concerns, and finds that source files with more embedded code smells are likely to have more defects and scattered changes, thus potentially requiring more maintenance effort.
Abstract: In dynamic Web applications, there often exists a type of code smell, called embedded code smells, that violates important principles in software development such as software modularity and separation of concerns, resulting in much maintenance effort. Detecting and fixing those code smells is crucial yet challenging since the code with smells is embedded in and generated from the server-side code. We introduce WebScent, a tool to detect such embedded code smells. WebScent first detects the smells in the generated code, and then locates them in the server-side code using the mapping between client-side code fragments and their embedding locations in the server program, which is captured during the generation of those fragments. Our empirical evaluation on real-world Web applications shows that 34%-81% of the tested server files contain embedded code smells. We also found that the source files with more embedded code smells are likely to have more defects and scattered changes, thus potentially requiring more maintenance effort.
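
The mapping WebScent relies on is captured while the server emits client-side code: every generated fragment is tagged with the server location that produced it, so a smell found in the page can be traced back. A small sketch of that recording mechanism follows; the emit/locate interface is hypothetical, not WebScent's API.

    import inspect

    class TracingWriter:
        """Record which server-side line produced each generated fragment."""
        def __init__(self):
            self.output, self.origin = [], []

        def emit(self, fragment):
            caller = inspect.stack()[1]  # the server code generating this fragment
            start = sum(len(f) for f in self.output)
            self.output.append(fragment)
            self.origin.append((start, start + len(fragment),
                                caller.filename, caller.lineno))

        def locate(self, offset):
            """Map an offset in the generated page back to server code."""
            for lo, hi, filename, lineno in self.origin:
                if lo <= offset < hi:
                    return filename, lineno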

Book ChapterDOI
12 Nov 2012
TL;DR: This work presents a novel abstraction refinement approach to automatically investigate and eliminate false positives in static analysis, and presents an implementation of the approach into the static analyzer Goanna and discusses a number of real-life experiments on larger C code projects, demonstrating that most false positives were removed.
Abstract: Static program analysis for bug detection in large C/C++ projects typically uses a high-level abstraction of the original program under investigation. As a result, so-called false positives are often inevitable, i.e., warnings that are not true bugs. In this work we present a novel abstraction refinement approach to automatically investigate and eliminate such false positives. Central to our approach is to view static analysis as a model checking problem, to iteratively compute infeasible sub-paths of infeasible paths using SMT solvers, and to refine our models by adding observer automata to exclude such paths. Based on this new framework we present an implementation of the approach into the static analyzer Goanna and discuss a number of real-life experiments on larger C code projects, demonstrating that we were able to remove most false positives automatically.
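
At the heart of the refinement loop is a satisfiability query: if the branch conditions along a warning's path cannot hold together, the warning is a false positive. A minimal sketch with Z3's Python bindings (assuming the z3-solver package is installed); the paper's approach additionally extracts infeasible sub-paths and encodes them as observer automata, which is omitted here.

    from z3 import Int, Solver, unsat

    def path_is_feasible(constraints):
        """Unsatisfiable path constraints mean the warning can be discarded."""
        solver = Solver()
        solver.add(*constraints)
        return solver.check() != unsat

    x = Int("x")
    # The reported path takes branch (x > 10) and later branch (x < 5).
    print(path_is_feasible([x > 10, x < 5]))  # -> False: a false positive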

Proceedings ArticleDOI
20 Jun 2012
TL;DR: The results of evaluating multiple subsets of open source code for common software vulnerabilities using several such static security analysis tools are presented to aid other developers in better discerning which tools to use in evaluating their own programs for security vulnerabilities.
Abstract: Software vulnerabilities present a significant impediment to the safe operation of many computer applications, both proprietary and open source. Fortunately, many static analysis tools exist to identify potential security issues. We present the results of evaluating multiple subsets of open source code for common software vulnerabilities using several such static security analysis tools. These results aid other developers in better discerning which tools to use in evaluating their own programs for security vulnerabilities.

Journal ArticleDOI
TL;DR: This article discusses and compares max-strategy and min-strategy improvement algorithms for static program analysis and indicates how the general setting can be instantiated for inferring numerical invariants of programs based on non-linear templates.

Proceedings ArticleDOI
02 Jun 2012
TL;DR: GraPacc is introduced, an advanced, context-sensitive code completion tool that is based on frequent API usage patterns that extracts the context- sensitive features from the code under editing and auto-completes the current code with the proper elements according to the chosen pattern.
Abstract: Code completion tools play an important role in daily development activities. They help developers by auto-completing tedious and detailed code during an editing session. However, existing code completion tools are limited to recommending only context-free code templates and a single method call of the variable under editing. We introduce GraPacc, an advanced, context-sensitive code completion tool that is based on frequent API usage patterns. It extracts the context-sensitive features from the code under editing, for example, the API elements on focus and the current editing point, and their relations to other code elements. It then ranks the relevant API usage patterns and auto-completes the current code with the proper elements according to the chosen pattern.

Proceedings ArticleDOI
07 Oct 2012
TL;DR: The XEMU framework for mutation based testing of embedded software binaries is presented, which applies an extension of the QEMU software emulator, which injects mutations at run-time by dynamic code translation without affecting the binary software under test.
Abstract: This paper presents the XEMU framework for mutation based testing of embedded software binaries. We apply an extension of the QEMU software emulator, which injects mutations at run-time by dynamic code translation without affecting the binary software under test. The injection is based on a mutation table, which is generated by control flow graph (CFG) analysis of the disassembled code prior to its execution without presuming access to source code. We introduce our approach by the example of the ARM instruction set architecture for which a mutation taxonomy is presented. In addition to extending the testing scope to target specific low level faults, XEMU addresses the reduction of the mutants creation, execution, and detection overheads. Moreover, we reduce testing efforts by applying binary CFG analysis and constraint-based test generation for improved test quality. The experimental results of a car motor management software show significant improvements over conventional source code based approaches while providing 100% accuracy in terms of the computed test quality metrics.
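
A mutation table in this spirit pairs each mutable instruction with a mutant; a classic mutation for binaries is negating a branch condition. The sketch below derives such entries from textual ARM disassembly. The table format and the parsing are invented for illustration; XEMU itself works on the binary CFG and applies entries during QEMU's dynamic translation.

    import re

    # Invert an ARM condition code to create a "negate branch condition" mutant.
    OPPOSITE = {"EQ": "NE", "NE": "EQ", "LT": "GE", "GE": "LT",
                "GT": "LE", "LE": "GT", "CS": "CC", "CC": "CS"}
    BRANCH = re.compile(r"^\s*([0-9a-f]+):\s+B(EQ|NE|LT|GE|GT|LE|CS|CC)\b(.*)$",
                        re.IGNORECASE)

    def mutation_table(disassembly):
        """One entry per mutable instruction: (address, original, mutant)."""
        table = []
        for line in disassembly.splitlines():
            m = BRANCH.match(line)
            if m:
                addr, cond, rest = m.groups()
                mutant = "B" + OPPOSITE[cond.upper()] + rest
                table.append((int(addr, 16), line.strip(), mutant.strip()))
        return table

    asm = "8000: BEQ 0x8010\n8004: MOV r0, #1"
    print(mutation_table(asm))  # one entry: BEQ -> BNE at 0x8000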

Patent
03 Jul 2012
TL;DR: In this paper, a source code analytic platform may use a combination of information retrieval and program analysis techniques to develop a code relationship graph 514 to perform various code applications, such as intent-based searches on a source code set, the documentation of undocumented code, risk analyses, natural language semantic searches, and others.
Abstract: In one embodiment, a code analytic platform may use a novel combination of information retrieval and program analysis techniques to develop a code relationship graph 514 to perform various code applications, such as intent-based searches on a source code set, the documentation of undocumented code, risk analyses, natural language semantic searches, and others. A source code analytics platform may perform a code analysis of a source code set 410. The source code analytics platform may perform a metadata analysis of a code production data set 430 associated with the source code set 410. The source code analytics platform may create a code relationship graph 514 associating the source code set 410 with a descriptive metadata set.

Book ChapterDOI
03 Sep 2012
TL;DR: Today’s businesses are inherently process-driven, and the costs—with respect to computational effort at runtime as well as financial costs—for operating business-process driven systems increase steadily.
Abstract: Today’s businesses are inherently process-driven. Consequently, the use of business-process driven systems, usually implemented on top of service-oriented or cloud-based infrastructures, is increasing. At the same time, the demand on the security, privacy, and compliance of such systems is increasing as well. As a result, the costs—with respect to computational effort at runtime as well as financial costs—for operating business-process driven systems increase steadily.

Patent
19 Apr 2012
TL;DR: A method for code analysis is presented, comprising the steps of inputting program code to an analyzer, assigning an objective quality measure to components of the analyzed code, and graphically displaying the objective quality measures.
Abstract: A method for code analysis comprising the steps of inputting program code to an analyzer, assigning an objective quality measure to components of the analyzed code, and graphically displaying the objective quality measures.

Proceedings ArticleDOI
01 Nov 2012
TL;DR: An entropy-based bug prediction approach using support vector regression (SVR) is proposed and compared with the conventional simple linear regression (SLR) method; the proposed models are found to be good bug predictors, as they show a significant improvement in performance.
Abstract: Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches, namely code churn, past bugs, refactoring, number of authors, file size and age, etc., measuring their performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These existing mathematical models, named software reliability growth models, are either calendar-time or testing-effort dependent. The occurrence of bugs in software is mainly due to the continuous changes in the software code, and these continuous changes make the code complex. The complexity of code changes has been quantified in terms of entropy by Hassan [9]. In the available literature, a few authors have proposed entropy-based bug prediction using the conventional simple linear regression (SLR) method. In this paper, we propose an entropy-based bug prediction approach using support vector regression (SVR). We have compared the results of the proposed models with the existing ones in the literature and found that the proposed models are good bug predictors, as they have shown a significant improvement in performance.
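
The pipeline is straightforward to sketch: compute Hassan-style change entropy per period, then regress bug counts on it with an SVR. The history data below is fabricated, and the kernel and parameters are illustrative rather than the paper's tuned values (assumes scikit-learn).

    from collections import Counter
    from math import log2
    from sklearn.svm import SVR

    def change_entropy(changed_files):
        """Shannon entropy of one period's change distribution over files."""
        counts = Counter(changed_files)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical history: per-period change lists and observed bug counts.
    periods = [["a.c", "a.c", "b.c"], ["a.c", "b.c", "c.c", "d.c"], ["a.c"] * 5]
    bugs = [3, 6, 1]

    X = [[change_entropy(p)] for p in periods]
    model = SVR(kernel="rbf", C=10.0).fit(X, bugs)
    print(model.predict([[1.5]]))  # predicted bugs for a period with entropy 1.5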

Patent
Yi Zhang, Chao Zhang, Zhihui Du, Liankun Liu, Teng Ma 
04 Jul 2012
TL;DR: In this paper, a one-code multi-recognition method for two-dimensional codes is proposed, comprising the following steps: a service parameter i is arranged in a two-dimensional code recognition program, whose client side is client side i; client side i combines the acquired two-dimensional code content and the service parameter i into a submission URL (Uniform Resource Locator) and submits it to a two-dimensional code analysis platform in a GET manner.
Abstract: The invention relates to a one-code multi-recognition method for two-dimensional codes. The method is technically characterized by comprising the following steps: a service parameter i is arranged in a two-dimensional code recognition program, whose client side is client side i; client side i combines the acquired two-dimensional code content and the service parameter i into a submission URL (Uniform Resource Locator) and submits it to a two-dimensional code analysis platform in a GET manner; the platform carries out a matched search in a two-dimensional code analysis database according to the two-dimensional code content and the service parameter i, and retrieves the matching database record; the platform then combines the database record and the service parameter i into an information display URL carrying the service parameter i; and client side i opens the information display URL in a browser and displays the related service information for parameter i. The method has the advantage that different user groups can view different kinds of service information from the same code, which greatly improves the flexibility of two-dimensional code applications, with one service corresponding to one kind of display information.
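
The claimed flow is an ordinary parameterized lookup, which a few lines make concrete. The endpoints and the record format below are invented for illustration; the patent does not specify them.

    from urllib.parse import urlencode

    PLATFORM = "https://platform.example/resolve"  # hypothetical endpoint

    def submission_url(code_content, service_parameter):
        """Client side i: combine scanned content with its service parameter."""
        return PLATFORM + "?" + urlencode({"content": code_content,
                                           "service": service_parameter})

    def resolve(content, service, database):
        """Platform: matched search keyed on content AND service parameter,
        then build the information display URL carrying the parameter."""
        record = database.get((content, service))
        if record is None:
            return None
        return record["display_url"] + "?" + urlencode({"service": service})

    db = {("PROMO-42", "retail"): {"display_url": "https://info.example/promo"}}
    print(resolve("PROMO-42", "retail", db))
    # -> https://info.example/promo?service=retail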