Showing papers on "Heuristic" published in 2017


Journal ArticleDOI
TL;DR: In this paper, the authors employ dynamic programming (DP) to find the optimal engine actions in PHEVs, and propose a recalibration method that improves rule-based energy management using the results of the DP algorithm.

471 citations


Posted Content
TL;DR: In this paper, a combination of reinforcement learning and graph embedding is proposed to learn heuristics for combinatorial optimization problems over graphs, such as Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
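
To make the "learned greedy policy" concrete, the following minimal sketch (an illustration, not the paper's code) shows the incremental construction loop for Minimum Vertex Cover, with a hand-crafted degree score standing in for the trained graph-embedding network:

```python
def greedy_construct(graph, score, budget=None):
    """Incrementally build a node set, at each step adding the node a
    scoring function ranks highest given the partial solution. `score`
    stands in for the paper's graph-embedding Q-network."""
    partial = set()
    candidates = set(graph)
    while candidates and (budget is None or len(partial) < budget):
        best = max(candidates, key=lambda v: score(graph, partial, v))
        partial.add(best)
        candidates.remove(best)
        # Minimum Vertex Cover termination: stop once every edge is covered.
        if all(u in partial or w in partial for u in graph for w in graph[u]):
            break
    return partial

# Hand-crafted stand-in score: how many still-uncovered edges v would cover.
def degree_score(graph, partial, v):
    return sum(1 for w in graph[v] if w not in partial)

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_construct(g, degree_score))  # e.g. {0, 2}
```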

455 citations


Journal ArticleDOI
TL;DR: This work proposes a new set of 100 instances ranging from 100 to 1000 customers, designed to provide a more comprehensive and balanced experimental setting, and reports an analysis of state-of-the-art exact and heuristic methods.

314 citations


Journal ArticleDOI
TL;DR: An online algorithm is proposed to learn the unknown dynamic environment while guaranteeing that the performance gap relative to the optimal strategy grows at most logarithmically with time.
Abstract: With mobile devices increasingly able to connect to cloud servers from anywhere, resource-constrained devices can potentially perform offloading of computational tasks to either save local resource usage or improve performance. It is of interest to find optimal assignments of tasks to local and remote devices that can take into account the application-specific profile, availability of computational resources, and link connectivity, and find a balance between energy consumption costs of mobile devices and latency for delay-sensitive applications. We formulate an NP-hard problem to minimize the application latency while meeting prescribed resource utilization constraints. Different from most existing works, which either rely on an integer programming solver or on heuristics that offer no theoretical performance guarantees, we propose Hermes, a novel fully polynomial time approximation scheme (FPTAS). We show that for a subset of problem instances, in which the application task graphs can be described as serial trees, Hermes provides a solution with latency no more than $(1+\epsilon)$ times the minimum while incurring complexity that is polynomial in the problem size and $\frac{1}{\epsilon}$. We further propose an online algorithm to learn the unknown dynamic environment and guarantee that the performance gap compared to the optimal strategy is bounded by a logarithmic function with time. Evaluation on real data sets collected from several benchmarks shows that Hermes improves latency by 16 percent compared to a previously published heuristic while increasing CPU computing time by only 0.4 percent of overall latency.
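
The flavor of the underlying assignment problem can be sketched for the simplest case, a serial task chain with a local/remote choice per task, using dynamic programming over a quantized energy axis; the quantization step plays the role that $\epsilon$ plays in an FPTAS. All numbers below are made up, and this is not Hermes itself:

```python
# Minimal sketch (not Hermes): assign each task in a serial chain to the
# mobile device or the cloud, minimizing total latency under an energy
# budget, via DP over a quantized energy axis.
tasks = [  # (local_time, local_energy, remote_time, remote_energy)
    (4.0, 3.0, 1.5, 1.0),   # remote_energy = radio cost of offloading
    (2.0, 1.5, 2.5, 0.8),
    (5.0, 4.0, 2.0, 1.2),
]
BUDGET, STEP = 5.0, 0.1            # energy budget and quantization step
n_levels = int(BUDGET / STEP) + 1

INF = float("inf")
best = [INF] * n_levels            # best[e] = min latency using <= e*STEP energy
best[0] = 0.0
for lt, le, rt, re in tasks:
    nxt = [INF] * n_levels
    for e, lat in enumerate(best):
        if lat == INF:
            continue
        for dt, de in ((lt, le), (rt, re)):   # local vs remote choice
            e2 = e + int(round(de / STEP))
            if e2 < n_levels:
                nxt[e2] = min(nxt[e2], lat + dt)
    best = nxt
print("min latency within budget:", min(best))
```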

233 citations


Journal ArticleDOI
TL;DR: The proposed algorithm combines the advantages of an evolutionary genetic algorithm with heuristic approaches; it outperformed three well-known heuristic algorithms in makespan and a recent meta-heuristic algorithm in execution time.

221 citations


Proceedings Article
01 Dec 2017
TL;DR: In this article, a generative adversarial approach is proposed to learn a sequence model over user-specified transformation functions; the method can make use of arbitrary, non-deterministic transformation functions and is robust to misspecified user input.
Abstract: Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches.
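
The interface the abstract implies (user-specified, possibly non-deterministic transformation functions composed into sequences) can be sketched as follows; the uniform sequence sampler is a stand-in for the paper's adversarially trained sequence model, and the toy "image" and TFs are invented for illustration:

```python
import random

# User-specified transformation functions (TFs) on a toy "image": here just
# a list of numbers. TFs may be non-deterministic, as the paper allows.
def add_noise(x):  return [v + random.gauss(0, 0.1) for v in x]
def scale(x):      return [v * random.uniform(0.9, 1.1) for v in x]
def shift(x):      return [v + random.choice([-1, 1]) * 0.5 for v in x]

TFS = [add_noise, scale, shift]

def sample_sequence(length=3):
    """Stand-in for the learned generator: sample a TF sequence uniformly.
    The paper instead trains a sequence model adversarially so that composed
    TFs keep augmented points within the data distribution."""
    return [random.choice(TFS) for _ in range(length)]

def augment(x, seq):
    for tf in seq:
        x = tf(x)
    return x

example = [1.0, 2.0, 3.0]
print(augment(example, sample_sequence()))
```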

206 citations


Journal ArticleDOI
01 Feb 2017
TL;DR: A non-dominance sort based Hybrid Particle Swarm Optimization (HPSO) algorithm is proposed to handle the workflow scheduling problem with multiple conflicting objective functions on IaaS clouds; its performance is compared with state-of-the-art multi-objective meta-heuristics.
Abstract: Nowadays, cloud computing is a technology that avoids upfront provisioning costs while providing scalability and elasticity for accessible resources on a pay-per-use basis. To satisfy the increasing demand for computing power to execute large-scale scientific workflow applications, workflow scheduling is the main challenge in Infrastructure-as-a-Service (IaaS) clouds. Since workflow scheduling is NP-complete, meta-heuristic approaches are the preferred option. Users often specify deadline and budget constraints for scheduling these workflow applications over cloud resources, but these constraints conflict with each other: cheaper resources are slower than expensive ones. Most existing studies optimize only one objective, i.e., either time minimization or cost minimization under user-specified Quality of Service (QoS) constraints. However, due to the complexity of workflows and the dynamic nature of the cloud, a trade-off solution is required to balance execution time and processing cost. To address these issues, this paper presents a non-dominance sort based Hybrid Particle Swarm Optimization (HPSO) algorithm to handle the workflow scheduling problem with multiple conflicting objective functions on IaaS clouds. The proposed algorithm is a hybrid of our previously proposed Budget and Deadline constrained Heterogeneous Earliest Finish Time (BDHEFT) algorithm and multi-objective PSO. The HPSO heuristic tries to optimize two conflicting objectives, namely makespan and cost, under the deadline and budget constraints; the energy consumption of the resulting workflow schedule is also minimized. The proposed algorithm returns a set of Pareto-optimal solutions from which the user can choose the best one. The performance of the proposed heuristic is compared with state-of-the-art multi-objective meta-heuristics, namely NSGA-II, MOPSO, and e-FDPSO. The simulation analysis shows that the solutions obtained with the proposed heuristic deliver better convergence and more uniform spacing among the solutions than the others. Hence it is applicable to a wide class of multi-objective optimization problems for scheduling scientific workflows over IaaS clouds.
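
The "non-dominance sort" at the heart of such multi-objective methods reduces to a Pareto dominance test over (makespan, cost) pairs; a minimal sketch with hypothetical schedule values:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (here: makespan,
    cost - both minimized) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (makespan, cost) values of candidate workflow schedules.
schedules = [(120, 9.0), (100, 12.0), (150, 7.5), (110, 12.5), (100, 11.0)]
print(pareto_front(schedules))  # the trade-off set presented to the user
```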

181 citations


Posted Content
TL;DR: This work proposes to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model, which results in using surprisal as intrinsic motivation.
Abstract: Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as $\epsilon$-greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent's surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the $k$-step learning progress. We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.
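
The surprisal bonus itself is compact: the intrinsic reward is $-\log p_\text{model}(s'|s,a)$ under the learned dynamics model. The sketch below substitutes a tabular count-based model for the paper's neural one, purely for illustration:

```python
import math
from collections import defaultdict

class CountModel:
    """Tabular stand-in for the paper's learned dynamics model: estimates
    P(s' | s, a) from visit counts with Laplace smoothing."""
    def __init__(self, n_states, alpha=1.0):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.n_states, self.alpha = n_states, alpha

    def update(self, s, a, s2):
        self.counts[(s, a)][s2] += 1.0

    def prob(self, s, a, s2):
        c = self.counts[(s, a)]
        total = sum(c.values()) + self.alpha * self.n_states
        return (c[s2] + self.alpha) / total

def shaped_reward(model, s, a, s2, r_ext, eta=0.1):
    # Surprisal bonus: -log p_model(s'|s,a). Transitions that are rare under
    # the current model yield large intrinsic rewards, driving exploration.
    r_int = -math.log(model.prob(s, a, s2))
    return r_ext + eta * r_int

m = CountModel(n_states=4)
m.update(0, 1, 2)
print(shaped_reward(m, 0, 1, 2, r_ext=0.0))  # seen transition: small bonus
print(shaped_reward(m, 0, 1, 3, r_ext=0.0))  # unseen transition: larger bonus
```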

168 citations


Journal ArticleDOI
01 Apr 2017
TL;DR: A simulated annealing (SA) heuristic is proposed to solve the hybrid vehicle routing problem (HVRP), an extension of the Green Vehicle Routing Problem (G-VRP); results show that the proposed SA effectively solves the HVRP.
Abstract: This study proposes the Hybrid Vehicle Routing Problem (HVRP), an extension of the Green Vehicle Routing Problem (G-VRP). We focus on vehicles that use a hybrid power source, known as Plug-in Hybrid Electric Vehicles (PHEVs), and formulate a mathematical model to minimize the total travel cost of driving PHEVs. The model considers the use of electric and fuel power depending on the availability of electric charging or fuel stations. We develop simulated annealing with a restart strategy (SA_RS) to solve this problem in two versions. The first determines the acceptance probability of a worse solution using the Boltzmann function, denoted SA_RSBF; the second uses the Cauchy function, denoted SA_RSCF. The proposed SA algorithm is first verified on benchmark data for the capacitated vehicle routing problem (CVRP), where it performs well, confirming its efficiency in solving the CVRP. Further analysis shows that SA_RSCF is preferable to SA_RSBF and that SA with a restart strategy outperforms SA without one. We then use SA_RSCF to solve the HVRP. The numerical experiments show that vehicle type and the number of electric charging stations affect the total travel cost, and a sensitivity analysis examines the effect of hybrid vehicles and charging stations on travel cost.
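
The difference between the two SA variants is the acceptance rule for worsening moves. The sketch below shows the standard Boltzmann rule and one common Cauchy-style rule (an assumption; the abstract does not give the exact formula), plus a restart strategy that jumps back to the incumbent when the search stagnates:

```python
import math, random

def accept_boltzmann(delta, T):
    # Standard SA rule (SA_RSBF flavor): always accept improvements,
    # accept a worsening move with probability exp(-delta / T).
    return delta <= 0 or random.random() < math.exp(-delta / T)

def accept_cauchy(delta, T):
    # One common Cauchy-style rule (assumption - the abstract does not give
    # the exact formula): heavier tail than Boltzmann at low temperatures.
    return delta <= 0 or random.random() < T * T / (T * T + delta * delta)

def sa_with_restart(init, neighbor, cost, accept, T0=10.0, cooling=0.95,
                    iters=2000, restart_after=200):
    best = cur = init
    stale, T = 0, T0
    for _ in range(iters):
        cand = neighbor(cur)
        if accept(cost(cand) - cost(cur), T):
            cur = cand
        if cost(cur) < cost(best):
            best, stale = cur, 0
        else:
            stale += 1
        if stale >= restart_after:       # restart strategy: jump back to the
            cur, stale = best, 0         # incumbent when search stagnates
        T *= cooling
    return best

# Toy 1-D test: minimize (x - 3)^2 over integers.
res = sa_with_restart(0, lambda x: x + random.choice([-1, 1]),
                      lambda x: (x - 3) ** 2, accept_cauchy)
print(res)
```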

158 citations


Journal ArticleDOI
TL;DR: A cross-view fusion algorithm is proposed that yields a similarity metric for multiview data by systematically fusing multiple similarity measures, together with a heuristic approach to controlling the number of iterations in the fusion process.
Abstract: Learning an ideal metric is crucial to many tasks in computer vision. Diverse feature representations may attack this problem from different aspects: visual data objects described by multiple features can be decomposed into multiple views, which often provide complementary information. In this paper, we propose a cross-view fusion algorithm that leads to a similarity metric for multiview data by systematically fusing multiple similarity measures. Unlike existing paradigms, we focus on learning a distance measure by exploiting a graph structure of data samples, where an input similarity matrix can be improved through propagation by graph random walk. In particular, we construct multiple graphs, each corresponding to an individual view, and present a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. Our method scales to large amounts of data by enforcing sparsity through an anchor graph representation. To adaptively control the effects of different views, we dynamically learn view-specific coefficients, which are leveraged into the graph random walk to balance the views. However, such a strategy may lead to an over-smooth similarity metric, where affinities between dissimilar samples are enlarged by excessive cross-view fusion. Thus, we devise a heuristic approach to controlling the iteration number in the fusion process in order to avoid over-smoothing. Extensive experiments conducted on real-world data sets validate the effectiveness and efficiency of our approach.
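
A hedged sketch of the fusion loop (SNF-style, not the paper's exact update): each view's random-walk transition matrix is propagated through the average of the other views, and a small fixed iteration count stands in for the paper's heuristic stopping rule against over-smoothing:

```python
import numpy as np

def row_normalize(S):
    return S / S.sum(axis=1, keepdims=True)

def fuse_views(similarities, n_iter=3):
    """SNF-style sketch: each view's transition matrix is propagated through
    the average of the other views. The small fixed n_iter plays the role of
    the paper's heuristic stopping rule - iterating too long over-smooths
    the fused metric."""
    P = [row_normalize(S.astype(float)) for S in similarities]
    for _ in range(n_iter):
        P = [row_normalize(P[v] @ np.mean([P[u] for u in range(len(P)) if u != v],
                                          axis=0) @ P[v].T)
             for v in range(len(P))]
    return np.mean(P, axis=0)  # fused similarity across views

# Two toy 3x3 view similarities over the same 3 samples.
S1 = np.array([[1., .8, .1], [.8, 1., .2], [.1, .2, 1.]])
S2 = np.array([[1., .7, .2], [.7, 1., .1], [.2, .1, 1.]])
print(fuse_views([S1, S2]))
```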

145 citations


Journal ArticleDOI
TL;DR: This paper identifies and exhaustively compares the best existing heuristics and metaheuristics, establishing the state of the art in approximate procedures for this relevant problem.

Journal ArticleDOI
TL;DR: CoFIM, a community-based framework for influence maximization on large-scale networks, is proposed; it derives a simple evaluation form of the total influence spread that is submodular and can be efficiently computed, together with a fast algorithm to select the seed nodes.
Abstract: Influence maximization is a classic optimization problem studied in the area of social network analysis and viral marketing. Given a network, it is defined as the problem of finding k seed nodes so that the influence spread of the network can be optimized. Kempe et al. proved that this problem is NP-hard and that the objective function is submodular, based on which a greedy algorithm was proposed to give a near-optimal solution. However, this simple greedy algorithm is time-consuming, which limits its application on large-scale networks, while heuristic algorithms generally cannot provide any performance guarantee. To solve this problem, in this paper we propose CoFIM, a community-based framework for influence maximization on large-scale networks. In our framework the influence propagation process is divided into two phases: (i) seed expansion and (ii) intra-community propagation. The first phase is the expansion of seed nodes among different communities at the beginning of diffusion. The second phase is the influence propagation within communities, which are independent of each other. Based on the framework, we derive a simple evaluation form of the total influence spread which is submodular and can be efficiently computed. We then propose a fast algorithm to select the seed nodes. Experimental results on synthetic and nine real-world large datasets, including networks with millions of nodes and hundreds of millions of edges, show that our algorithm achieves competitive influence spread compared with state-of-the-art algorithms and is much more efficient in terms of both time and memory usage.
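
For contrast, the simple greedy algorithm the abstract refers to looks like this: Monte Carlo estimation of independent-cascade spread inside a marginal-gain loop. This is the expensive baseline whose cost CoFIM's submodular surrogate avoids (toy graph and parameters below):

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of expected spread under the independent
    cascade model: each newly activated node tries once to activate each
    neighbor with probability p."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = [w for v in frontier for w in graph[v]
                   if w not in active and random.random() < p]
            active.update(nxt)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(graph, k):
    # Kempe et al.'s greedy: repeatedly add the node with the largest
    # marginal spread gain. Near-optimal (by submodularity) but slow - the
    # bottleneck CoFIM's two-phase evaluation is designed to avoid.
    seeds = set()
    for _ in range(k):
        base = ic_spread(graph, seeds)
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: ic_spread(graph, seeds | {v}) - base)
        seeds.add(best)
    return seeds

g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(greedy_im(g, 2))
```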

Journal ArticleDOI
TL;DR: Experimental results show that compared with traditional algorithms, the performance of ProLiS is very competitive and L-ACO performs the best in terms of execution costs and success ratios of meeting deadlines.
Abstract: Nowadays it is becoming more and more attractive to execute workflow applications in the cloud because it enables workflow applications to use computing resources on demand. Meanwhile, it also challenges traditional workflow scheduling algorithms that only concentrate on optimizing the execution time. This paper investigates how to minimize the execution cost of a workflow in clouds under a deadline constraint and proposes a metaheuristic algorithm, L-ACO, as well as a simple heuristic, ProLiS. ProLiS distributes the deadline to each task, proportionally to a novel definition of probabilistic upward rank, and follows a two-step list scheduling methodology: it ranks tasks and then sequentially allocates each task a service that meets the sub-deadline and minimizes the cost. L-ACO employs ant colony optimization to carry out deadline-constrained cost optimization: the ant constructs an ordered task list according to the pheromone trail and probabilistic upward rank, and uses the same deadline distribution and service selection methods as ProLiS to build solutions. Moreover, the deadline is relaxed to guide the search of L-ACO towards constrained optimization. Experimental results show that, compared with traditional algorithms, the performance of ProLiS is very competitive and L-ACO performs the best in terms of execution costs and success ratios of meeting deadlines.
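
ProLiS's two-step idea can be sketched on a plain task chain: distribute the deadline proportionally to a rank (here each task's baseline runtime stands in for the paper's probabilistic upward rank), then give each task the cheapest service meeting its sub-deadline. The services and runtimes below are hypothetical, and the data is chosen so a feasible service always exists:

```python
# Hedged sketch of ProLiS's two-step idea on a simple task chain. The rank
# here is just each task's baseline runtime - a stand-in for the paper's
# probabilistic upward rank. Services: (speed_factor, cost_per_unit_time).
tasks = [10.0, 4.0, 6.0]                          # baseline runtimes
services = [(1.0, 1.0), (2.0, 3.0), (4.0, 10.0)]  # faster costs more
DEADLINE = 12.0

total_rank = sum(tasks)
plan, clock, cost = [], 0.0, 0.0
for rt in tasks:
    sub_deadline = clock + DEADLINE * (rt / total_rank)  # proportional share
    # Cheapest service that finishes this task by its sub-deadline
    # (this toy data guarantees at least one is feasible).
    feasible = [(spd, c) for spd, c in services if clock + rt / spd <= sub_deadline]
    spd, c = min(feasible, key=lambda s: (rt / s[0]) * s[1])
    dur = rt / spd
    plan.append((dur, c * dur))
    clock += dur
    cost += c * dur
print(plan, "makespan:", clock, "cost:", cost)
```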

Journal ArticleDOI
TL;DR: An approach named Density-Based Controller Placement (DBCP) uses a density-based switch clustering algorithm to split the network into several sub-networks; it provides better performance than state-of-the-art approaches in terms of time consumption, propagation latency, and fault tolerance.

Journal ArticleDOI
TL;DR: A number of algorithmic improvements implemented in the AGLIBRARY optimization solver are presented to improve the chances of quickly finding good-quality solutions; they often outperform a state-of-the-art tabu search algorithm and a commercial solver in terms of reduced computation times and/or train delays.

Journal ArticleDOI
TL;DR: The production planning problem in additive manufacturing and 3D printing is introduced for the first time in the literature; a mathematical model formulating it is developed and coded in CPLEX, and two heuristic procedures, namely the best-fit and adapted best-fit rules, are developed in JavaScript.

Journal ArticleDOI
TL;DR: A metaheuristic is proposed for the Time-Dependent Pollution-Routing Problem, which consists of routing a number of vehicles to serve a set of customers and determining their speed on each route segment, with the objective of minimizing the cost of drivers' wages and greenhouse gas emissions.

Journal ArticleDOI
TL;DR: In this article, the authors define the standard LRP as a deterministic, static, discrete, single-echelon, single-objective location-routing problem in which each customer (vertex) must be visited exactly once for the delivery of a good from a facility, and in which no inventory decisions are relevant.
Abstract: In this paper, we define the standard LRP as a deterministic, static, discrete, single-echelon, single-objective location-routing problem in which each customer (vertex) must be visited exactly once for the delivery of a good from a facility, and in which no inventory decisions are relevant. We review the literature on the standard LRP published since the survey by Nagy and Salhi appeared in 2006. We provide concise paper excerpts that convey the central ideas of each work, discuss recent developments in the field, provide a numerical comparison of the most successful heuristic algorithms, and list promising topics for further research.

Journal ArticleDOI
01 Aug 2017
TL;DR: Fodina, a process discovery technique with a strong focus on robustness and flexibility, is presented; it performs better in terms of process model quality, adds the ability to mine duplicate tasks, and allows for flexible configuration options.
Abstract: In this paper, we present Fodina, a process discovery technique with a strong focus on robustness and flexibility. To do so, we improve upon and extend an existing process discovery algorithm, namely Heuristics Miner. We have identified several drawbacks which impact the reliability of existing heuristic-based process discovery techniques and therefore propose a new algorithm which is shown to be better performing in terms of process model quality, adds the ability to mine duplicate tasks, and allows for flexible configuration options.

Journal ArticleDOI
TL;DR: It is shown that large-sized problems possessing the essential 3V's of big data, i.e., volume, variety, and velocity, consume non-polynomial time and cannot be solved optimally, so a heuristic (H-1) is proposed to solve large-sized problems involving big data.

Journal ArticleDOI
03 May 2017 - PLOS ONE
TL;DR: Six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput.
Abstract: Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
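
As a concrete illustration of one of the six compared rules, here is a minimal sketch of Min-min scheduling, assuming a simple model in which heterogeneity is captured by per-machine speed factors (the task lengths, speeds, and machine count below are made up):

```python
def min_min(task_lengths, n_machines, speeds):
    """Min-min: among unscheduled tasks, pick the one whose best achievable
    completion time is smallest and assign it to that machine. Heterogeneity
    enters through per-machine speed factors."""
    ready = [0.0] * n_machines            # machine available times
    unscheduled = list(range(len(task_lengths)))
    schedule = {}
    while unscheduled:
        ct, t, m = min((ready[m] + task_lengths[t] / speeds[m], t, m)
                       for t in unscheduled for m in range(n_machines))
        schedule[t] = m
        ready[m] = ct
        unscheduled.remove(t)
    return schedule, max(ready)           # assignment and makespan

sched, makespan = min_min([8, 2, 4, 6], n_machines=2, speeds=[1.0, 2.0])
print(sched, makespan)
```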

Proceedings Article
10 Feb 2017
TL;DR: It is shown that allocation of chores and classical allocation of goods have some fundamental connections but also differences which prevent a straightforward application of algorithms for goods in the chores setting and vice versa, and a new fairness concept called optimal MmS is introduced that represents the best possible allocation in terms of MmS that is guaranteed to exist.
Abstract: We consider Max-min Share (MmS) fair allocations of indivisible chores (items with negative utilities). We show that allocation of chores and classical allocation of goods (items with positive utilities) have some fundamental connections but also differences which prevent a straightforward application of algorithms for goods in the chores setting and vice versa. We prove that an MmS allocation does not need to exist for chores and computing an MmS allocation - if it exists - is strongly NP-hard. In view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for MmS fairness for chores. We then introduce a new fairness concept called optimal MmS that represents the best possible allocation in terms of MmS that is guaranteed to exist. We use connections to parallel machine scheduling to give (1) a polynomial-time approximation scheme for computing an optimal MmS allocation when the number of agents is fixed and (2) an effective and efficient heuristic with an ex-post worst-case analysis.
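
To make the MmS concept concrete, the following brute-force sketch computes one agent's Max-min Share value for chores under the usual definition (minimize, over all partitions into n bundles, the cost of the worst bundle); the costs are hypothetical and the enumeration is exponential, so this is an illustration only, not the paper's 2-approximation:

```python
from itertools import product

def mms_value_chores(costs, n_agents):
    """Brute-force Max-min Share value for chores from one agent's
    perspective: over all partitions of the chores into n bundles, minimize
    the cost of the worst bundle - the agent can guarantee receiving a
    bundle no costlier than this. Exponential; illustration only."""
    best = float("inf")
    for assignment in product(range(n_agents), repeat=len(costs)):
        bundles = [0.0] * n_agents
        for chore, bundle in zip(costs, assignment):
            bundles[bundle] += chore
        best = min(best, max(bundles))
    return best

# 3 agents, 5 chores with these costs (disutilities) for the evaluating agent.
print(mms_value_chores([4, 3, 3, 2, 1], n_agents=3))
```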

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new string SMT solver is presented that is faster than its competitors Z3str2, Norn, CVC4, S3, and S3P over a majority of three industrial-strength benchmarks, namely, Kaluza, PISA, and IBM AppScan.
Abstract: We present a new string SMT solver, Z3str3, that is faster than its competitors Z3str2, Norn, CVC4, S3, and S3P over a majority of three industrial-strength benchmarks, namely, Kaluza, PISA, and IBM AppScan. Z3str3 supports string equations, linear arithmetic over length function, and regular language membership predicate. The key algorithmic innovation behind the efficiency of Z3str3 is a technique we call theory-aware branching, wherein we modify Z3's branching heuristic to take into account the structure of theory literals to compute branching activities. In the traditional DPLL(T) architecture, the structure of theory literals is hidden from the DPLL(T) SAT solver because of the Boolean abstraction constructed over the input theory formula. By contrast, the theory-aware technique presented in this paper exposes the structure of theory literals to the DPLL(T) SAT solver's branching heuristic, thus enabling it to make much smarter decisions during its search than otherwise. As a consequence, Z3str3 has better performance than its competitors.
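
The core idea, branching informed by the structure of theory literals rather than by opaque Boolean variables, can be caricatured in a few lines. The sketch below is a toy illustration only (none of it is Z3's actual code, and the scoring convention, e.g. treating single-character tokens as string variables, is invented for the example):

```python
# Toy illustration of theory-aware branching (not Z3's implementation): the
# Boolean abstraction maps each propositional variable to a theory literal;
# branching prefers variables whose theory literal looks "cheap" to decide.
abstraction = {              # boolean var -> abstracted string-theory literal
    "b1": ("eq", "x", "abc"),                 # x = "abc"      (concrete)
    "b2": ("eq", ("concat", "x", "y"), "z"),  # x ++ y = z     (3 variables)
    "b3": ("regex", "x", "(ab)*"),            # x in (ab)*
}

def _flatten(term):
    if isinstance(term, tuple):
        for t in term[1:]:
            yield from _flatten(t)
    else:
        yield term

def structure_score(lit):
    """Smaller score = structurally simpler literal. A plain VSIDS heuristic
    never sees this structure; a theory-aware one folds it into activity.
    Toy convention: single-character tokens are string variables."""
    n_vars = sum(1 for part in lit[1:] for tok in _flatten(part)
                 if isinstance(tok, str) and len(tok) == 1)
    return {"eq": 0, "regex": 2}[lit[0]] + n_vars

def pick_branch(candidates):
    return min(candidates, key=lambda b: structure_score(abstraction[b]))

print(pick_branch(["b1", "b2", "b3"]))  # prefers the concrete equation b1
```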

Journal ArticleDOI
TL;DR: A composite scenario tree is proposed that captures both types of uncertainty, and its unique structure is exploited to derive new theoretical properties that can drastically reduce the number of non-anticipativity constraints (NACs).

Journal ArticleDOI
TL;DR: This paper addresses the Distributed Permutation Flowshop Scheduling Problem (DPFSP) with an artificial chemical reaction metaheuristic whose objective is to minimize the maximum completion time, and proves the efficiency of the proposed algorithm in comparison with some powerful algorithms.

Journal ArticleDOI
TL;DR: A genetic-based meta-heuristic algorithm is presented to address static task scheduling for processors in heterogeneous computing systems; it improves the performance of the genetic algorithm through significant changes in its genetic functions and the introduction of new operators that guarantee sample variety and consistent coverage of the whole search space.

Posted Content
TL;DR: Inspired by recent achievements of deep reinforcement learning (DRL) techniques, especially Pointer Network, on combinatorial optimization problems such as TSP, a DRL-based method is applied to optimize the sequence of items to be packed into the bin.
Abstract: In this paper, a new type of 3D bin packing problem (BPP) is proposed, in which a number of cuboid-shaped items must be put into a bin one by one orthogonally. The objective is to find a way to place these items that can minimize the surface area of the bin. This problem is based on the fact that there is no fixed-sized bin in many real business scenarios and the cost of a bin is proportional to its surface area. Our research shows that this problem is NP-hard. Based on previous research on 3D BPP, the surface area is determined by the sequence, spatial locations and orientations of items. Among these factors, the sequence of items plays a key role in minimizing the surface area. Inspired by recent achievements of deep reinforcement learning (DRL) techniques, especially the Pointer Network, on combinatorial optimization problems such as TSP, a DRL-based method is applied to optimize the sequence of items to be packed into the bin. Numerical results show that the method proposed in this paper achieves about a 5% improvement over a heuristic method.
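
The objective of this BPP variant is easy to state in code: once items are placed orthogonally, the bin is their bounding box and its surface area is what the learned packing order ultimately determines. A minimal sketch (with made-up placements) follows:

```python
def surface_area(placements):
    """Objective of the variable-sized 3D BPP: placements are
    (x, y, z, length, width, height) tuples for items already packed
    orthogonally; the bin is their bounding box, and we return its
    surface area."""
    L = max(x + l for x, y, z, l, w, h in placements)
    W = max(y + w for x, y, z, l, w, h in placements)
    H = max(z + h for x, y, z, l, w, h in placements)
    return 2 * (L * W + L * H + W * H)

# Two placements of the same two boxes: the packing decision changes the bin.
flat    = [(0, 0, 0, 3, 1, 1), (3, 0, 0, 1, 1, 1)]  # 4 x 1 x 1 bin -> 18
stacked = [(0, 0, 0, 3, 1, 1), (0, 0, 1, 1, 1, 1)]  # 3 x 1 x 2 bin -> 22
print(surface_area(flat), surface_area(stacked))
```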

Journal ArticleDOI
TL;DR: Substantial experiments on real-life datasets show that the proposed algorithm outperforms other heuristic algorithms for mining HUIs in terms of the number of discovered HUIs and convergence.
Abstract: High-utility itemset mining (HUIM) is a major contemporary data mining issue. It differs from frequent itemset mining (FIM), which considers only the frequency factor; HUIM applies both quantity and profit factors to reveal the most profitable products. Several previous approaches have been proposed to mine high-utility itemsets (HUIs), and most of them must handle an exponential search space when the number of distinct items and the size of the database are both very large. Therefore, two evolutionary computation (EC) techniques, the genetic algorithm (GA) and particle swarm optimization (PSO), were previously proposed to mine HUIs; in these studies, GAs and PSOs could obtain a large number of high-utility itemsets within a limited time. In this paper, a novel algorithm based on another evolutionary computation technique, ant colony optimization (ACO), is proposed to address this issue. Unlike GAs and PSOs, ACO produces feasible solutions in a constructive way and can largely avoid generating unreasonable solutions; a well-designed ACO approach can thus obtain suitable solutions efficiently. An algorithm based on the ant colony system (ACS), an extension of ACO, named high-utility itemset mining by ACS (HUIM-ACS), is proposed to efficiently find HUIs. In general, an EC algorithm cannot ensure that the provided solution is the global optimum. The designed HUIM-ACS algorithm, however, maps the complete solution space onto a routing graph and includes two pruning processes; it therefore guarantees that all HUIs are obtained once no candidate edge remains from the starting point. In addition, HUIM-ACS does not evaluate the same feasible solution twice, avoiding wasted computational resources. Substantial experiments on real-life datasets show that the proposed algorithm outperforms other heuristic algorithms for mining HUIs in terms of the number of discovered HUIs and convergence.
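
For concreteness, here is the underlying utility computation together with a brute-force HUI enumeration, a hedged sketch with a toy database (not HUIM-ACS itself, whose pheromone-guided construction and pruning are designed to avoid exactly this exponential scan):

```python
from itertools import combinations

# Transactions map item -> purchased quantity; profits give unit profit.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1, "a": 1}]
profits = {"a": 5, "b": 3, "c": 1}

def utility(itemset, tx):
    if not all(i in tx for i in itemset):
        return 0                       # transaction must contain the itemset
    return sum(tx[i] * profits[i] for i in itemset)

def high_utility_itemsets(min_util):
    """Brute-force enumeration (exponential) - the search space that
    HUIM-ACS explores with pheromone-guided construction and pruning."""
    items = sorted(profits)
    result = {}
    for r in range(1, len(items) + 1):
        for iset in combinations(items, r):
            u = sum(utility(iset, tx) for tx in transactions)
            if u >= min_util:
                result[iset] = u
    return result

print(high_utility_itemsets(min_util=15))  # e.g. {('a',): 20, ('a','b'): 24}
```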

Journal ArticleDOI
TL;DR: This work shows that pairs of individuals making group decisions meet this challenge by using a heuristic strategy called 'confidence matching': they match their communicated confidence so that certainty and uncertainty are stated in approximately equal measure by each party.
Abstract: Bang et al. use behavioural data from culturally distinct settings (United Kingdom and Iran) and computational modelling to show that, when making decisions in pairs, people adopt a confidence-matching heuristic to combine their opinions.