In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.
The design and analysis of approximation algorithms crucially involves a mathematical proof certifying the quality of the returned solutions in the worst case. This distinguishes them from heuristics such as annealing or genetic algorithms, which find reasonably good solutions on some inputs, but provide no clear indication at the outset on when they may succeed or fail.
There is widespread interest in theoretical computer science in better understanding the limits to which we can approximate certain famous optimization problems. For example, one of the long-standing open questions in computer science is to determine whether there is an algorithm that outperforms the 2-approximation for the Steiner Forest problem by Agrawal et al. The desire to understand hard optimization problems from the perspective of approximability is motivated by the discovery of surprising mathematical connections and broadly applicable techniques to design algorithms for hard optimization problems. One well-known example of the former is the Goemans–Williamson algorithm for maximum cut, which solves a graph theoretic problem using high dimensional geometry.
Introduction
A simple example of an approximation algorithm is one for the minimum vertex cover problem, where the goal is to choose the smallest set of vertices such that every edge in the input graph contains at least one chosen vertex. One way to find a vertex cover is to repeat the following process: find an uncovered edge, add both its endpoints to the cover, and remove all edges incident to either vertex from the graph. As any vertex cover of the input graph must use a distinct vertex to cover each edge that was considered in the process (since it forms a matching), the vertex cover produced, therefore, is at most twice as large as the optimal one. In other words, this is a constant-factor approximation algorithm with an approximation factor of 2. Under the recent unique games conjecture, this factor is even the best possible one.
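This procedure can be stated in a few lines. The following is a minimal Python sketch; the function name and the edge-list representation are illustrative, not from any particular library:

```python
def matching_vertex_cover(edges):
    """2-approximation for minimum vertex cover: scan the edges, and
    whenever an edge is still uncovered, add both of its endpoints.
    The picked edges form a maximal matching, and any vertex cover
    must use at least one distinct vertex per matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge (u, v) is uncovered
            cover.add(u)
            cover.add(v)
    return cover

# On the path 0-1-2-3 the algorithm returns all four vertices, while
# the optimal cover {1, 2} has size 2: the factor of 2 is attained.
print(matching_vertex_cover([(0, 1), (1, 2), (2, 3)]))  # {0, 1, 2, 3}
```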
NP-hard problems vary greatly in their approximability; some, such as the knapsack problem, can be approximated within a multiplicative factor $1 + \epsilon$, for any fixed $\epsilon > 0$, and therefore produce solutions arbitrarily close to the optimum (such a family of approximation algorithms is called a polynomial-time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial, factor unless P = NP, as in the case of the maximum clique problem. Therefore, an important benefit of studying approximation algorithms is a fine-grained classification of the difficulty of various NP-hard problems beyond the one afforded by the theory of NP-completeness. In other words, although NP-complete problems may be equivalent (under polynomial-time reductions) to each other from the perspective of exact solutions, the corresponding optimization problems behave very differently from the perspective of approximate solutions.
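To make the PTAS notion concrete, below is a sketch of the classic value-scaling scheme for 0/1 knapsack (in fact an FPTAS, since its running time is polynomial in both n and 1/ϵ); the function name and representation are illustrative:

```python
def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS sketch for 0/1 knapsack: scale values down by
    k = eps * max(values) / n, run the exact dynamic program indexed
    by scaled value (minimum weight achieving each value), and return
    the best feasible choice. Rounding loses at most eps * OPT, so the
    answer is >= (1 - eps) * OPT, in time O(n^3 / eps)."""
    n = len(values)
    k = eps * max(values) / n
    scaled = [int(v / k) for v in values]
    total = sum(scaled)
    INF = float("inf")
    min_weight = [0] + [INF] * total  # min_weight[p]: lightest subset of scaled value p
    taken = [set() for _ in range(total + 1)]
    for i in range(n):
        for p in range(total, scaled[i] - 1, -1):  # descending: each item used once
            cand = min_weight[p - scaled[i]] + weights[i]
            if cand < min_weight[p]:
                min_weight[p] = cand
                taken[p] = taken[p - scaled[i]] | {i}
    best_p = max(p for p in range(total + 1) if min_weight[p] <= capacity)
    return sum(values[i] for i in taken[best_p]), sorted(taken[best_p])

# eps = 0.1 guarantees at least 90% of the optimal value (here it is exact).
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))
```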
Algorithm design techniques
By now there are several established techniques to design approximation algorithms. These include the following.
- Greedy algorithm (a set cover sketch follows this list)
- Local search
- Enumeration and dynamic programming (which is also often used for parameterized approximations)
- Solving a convex programming relaxation to get a fractional solution. Then converting this fractional solution into a feasible solution by some appropriate rounding. The popular relaxations include the following.
- Linear programming relaxations
- Semidefinite programming relaxations
- Primal-dual methods
- Dual fitting
- Embedding the problem in some metric and then solving the problem on the metric. This is also known as metric embedding.
- Random sampling and the use of randomness in general in conjunction with the methods above.
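To illustrate the first technique in this list, here is a minimal sketch of the greedy algorithm for set cover, whose ln n-factor guarantee goes back to Johnson's 1974 work cited below; the identifiers are illustrative:

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly pick the subset covering the most
    still-uncovered elements. The cover found has size at most
    H(n) * OPT, where H(n) = 1 + 1/2 + ... + 1/n (about ln n)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda j: len(subsets[j] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("the given subsets do not cover the universe")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

subsets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {1, 4, 7}]
print(greedy_set_cover(range(1, 8), subsets))  # [0, 2, 1] on this input
```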
A posteriori guarantees
While approximation algorithms always provide an a priori worst case guarantee (be it additive or multiplicative), in some cases they also provide an a posteriori guarantee that is often much better. This is often the case for algorithms that work by solving a convex relaxation of the optimization problem on the given input. For example, there is a different approximation algorithm for minimum vertex cover that solves a linear programming relaxation to find a vertex cover that is at most twice the value of the relaxation. Since the value of the relaxation is never larger than the size of the optimal vertex cover, this yields another 2-approximation algorithm. While this is similar to the a priori guarantee of the previous approximation algorithm, the guarantee of the latter can be much better (this happens when the value of the LP relaxation is far from the size of the optimal vertex cover).
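A hypothetical sketch of this LP-based algorithm, using scipy.optimize.linprog for the relaxation and the standard rule of keeping every vertex whose fractional value is at least 1/2:

```python
import numpy as np
from scipy.optimize import linprog

def lp_vertex_cover(n, edges):
    """2-approximation for vertex cover via LP rounding. Solves the
    relaxation min sum_v x_v s.t. x_u + x_v >= 1 per edge, 0 <= x <= 1,
    then keeps every vertex with x_v >= 1/2. Each edge constraint forces
    one endpoint to at least 1/2, so the rounded set is a cover, and
    rounding at most doubles the LP value, which is itself <= OPT."""
    c = np.ones(n)
    A_ub = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A_ub[row, u] = A_ub[row, v] = -1.0  # encode x_u + x_v >= 1 as <=
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(len(edges)), bounds=(0, 1))
    cover = {v for v in range(n) if res.x[v] >= 0.5 - 1e-9}
    return cover, res.fun  # res.fun is the LP lower bound on OPT

cover, lp_value = lp_vertex_cover(4, [(0, 1), (1, 2), (2, 3)])
print(len(cover), "<=", 2 * lp_value)  # a posteriori, instance-specific bound
```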
Hardness of approximation
Approximation algorithms as a research area is closely related to and informed by inapproximability theory, where the non-existence of efficient algorithms with certain approximation ratios is proved (conditioned on widely believed hypotheses such as the P ≠ NP conjecture) by means of reductions. In the case of the metric traveling salesman problem, the best known inapproximability result rules out algorithms with an approximation ratio less than 123/122 ≈ 1.008196 unless P = NP (Karpinski, Lampis, and Schmied). Coupled with the knowledge of the existence of Christofides' 1.5-approximation algorithm, this tells us that the threshold of approximability for metric traveling salesman (if it exists) is somewhere between 123/122 and 1.5.
While inapproximability results have been proved since the 1970s, such results were obtained by ad hoc means and no systematic understanding was available at the time. It is only since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set and the famous PCP theorem, that modern tools for proving inapproximability results were uncovered. The PCP theorem, for example, shows that Johnson's 1974 approximation algorithms for Max SAT, set cover, independent set and coloring all achieve the optimal approximation ratio, assuming P ≠ NP.
Practicality
Not all approximation algorithms are suitable for direct practical applications. Some involve solving non-trivial linear programming/semidefinite relaxations (which may themselves invoke the ellipsoid algorithm), complex data structures, or sophisticated algorithmic techniques, leading to difficult implementation issues or improved running time performance (over exact algorithms) only on impractically large inputs. Implementation and running time issues aside, the guarantees provided by approximation algorithms may themselves not be strong enough to justify their consideration in practice. Despite their inability to be used "out of the box" in practical applications, the ideas and insights behind the design of such algorithms can often be incorporated in other ways in practical algorithms. In this way, the study of even very expensive algorithms is not a completely theoretical pursuit as they can yield valuable insights.
In other cases, even if the initial results are of purely theoretical interest, over time, with an improved understanding, the algorithms may be refined to become more practical. One such example is the initial PTAS for Euclidean TSP by Sanjeev Arora (and independently by Joseph Mitchell) which had a prohibitive running time of $n^{O(1/\epsilon)}$ for a $1 + \epsilon$ approximation. Yet, within a year these ideas were incorporated into a near-linear time $O(n \log n)$ algorithm for any constant $\epsilon > 0$.
Structure of approximation algorithms
Given an optimization problem
$$\Pi \subseteq I \times S,$$
where $\Pi$ is an approximation problem, $I$ the set of inputs and $S$ the set of solutions, we can define the cost function
$$c : S \rightarrow \mathbb{R}$$
and the set of feasible solutions
$$\forall i \in I, \quad S(i) = \{ s \in S \mid i \,\Pi\, s \}.$$
Finding the best solution $s^*$ for a maximization or a minimization problem then means finding
$$s^* \in S(i), \quad c(s^*) = \min/\max \{ c(s) \mid s \in S(i) \}.$$
Given a feasible solution $s' \in S(i)$ with $s' \neq s^*$, we would like a guarantee on the quality of the solution, that is, a bound on its performance (the approximation factor).
Specifically, writing $A_\Pi(i) \in S(i)$ for the solution returned by the algorithm, the algorithm has an approximation factor (or approximation ratio) of $\rho(n)$ if, for every $i \in I$ with $|i| = n$, we have:
- for a minimization problem: $\frac{c(A_\Pi(i))}{c(s^*(i))} \leq \rho(n)$, which means the solution taken by the algorithm divided by the optimal solution achieves a ratio of at most $\rho(n)$;
- for a maximization problem: $\frac{c(s^*(i))}{c(A_\Pi(i))} \leq \rho(n)$, which means the optimal solution divided by the solution taken by the algorithm achieves a ratio of at most $\rho(n)$.
The approximation factor can be proven tight (a tight approximation) by demonstrating that there exist instances where the algorithm attains exactly the claimed ratio. To do so, it suffices to construct an input instance designed to force the algorithm into a worst-case scenario.
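For instance, the factor of 2 for the matching-based vertex cover algorithm from the introduction is tight, as the following self-contained check on a worst-case family illustrates:

```python
def matching_vertex_cover(edges):
    """The matching-based 2-approximation from the introduction."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Worst-case family: the graph is itself a perfect matching on n edges.
# The algorithm takes both endpoints of every edge (2n vertices), while
# one endpoint per edge (n vertices) already covers everything, so the
# ratio is exactly 2 on every instance of this family: the bound is tight.
n = 5
edges = [(2 * i, 2 * i + 1) for i in range(n)]
assert len(matching_vertex_cover(edges)) == 2 * n  # optimum is n
```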
Performance guarantees
For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a ρ-approximation algorithm A is defined to be an algorithm for which it has been proven that the value/cost, f(x), of the approximate solution A(x) to an instance x will not be more (or less, depending on the situation) than a factor ρ times the value, OPT, of an optimum solution:
$$\begin{cases} \mathrm{OPT} \leq f(x) \leq \rho\,\mathrm{OPT}, & \text{if } \rho > 1; \\ \rho\,\mathrm{OPT} \leq f(x) \leq \mathrm{OPT}, & \text{if } \rho < 1. \end{cases}$$
The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c, if it has been proven for every instance x that
$$(\mathrm{OPT} - c) \leq f(x) \leq (\mathrm{OPT} + c).$$
Similarly, the performance guarantee, R(x,y), of a solution y to an instance x is defined as
$$R(x,y) = \max\left( \frac{\mathrm{OPT}}{f(y)}, \frac{f(y)}{\mathrm{OPT}} \right),$$
where f(y) is the value/cost of the solution y for the instance x. Clearly, the performance guarantee is greater than or equal to 1 and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm and has an approximation ratio of r(n). Likewise, a problem with an r(n)-approximation algorithm is said to be r(n)-approximable or have an approximation ratio of r(n).
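A direct transcription of this definition (an illustrative helper, assuming positive solution values):

```python
def performance_guarantee(f_y: float, opt: float) -> float:
    """R(x, y) = max(OPT / f(y), f(y) / OPT). The outer max makes the
    ratio at least 1 for both minimization (f(y) >= OPT) and
    maximization (f(y) <= OPT); it equals 1 iff f(y) == OPT."""
    return max(opt / f_y, f_y / opt)

assert performance_guarantee(4, 2) == 2.0  # e.g. a cover of size 4, optimum 2
assert performance_guarantee(7, 7) == 1.0  # optimal solutions have ratio 1
```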
For minimization problems, the two different guarantees provide the same result, while for maximization problems a relative performance guarantee of ρ is equivalent to a performance guarantee of $r = \rho^{-1}$. In the literature, both definitions are common, but it is clear which definition is used since, for maximization problems, ρ ≤ 1 while r ≥ 1.
The absolute performance guarantee $\mathrm{P}_A$ of some approximation algorithm A, where x refers to an instance of a problem, and where $R_A(x)$ is the performance guarantee of A on x (i.e. ρ for problem instance x), is:
$$\mathrm{P}_A = \inf \{ r \geq 1 \mid R_A(x) \leq r,\ \forall x \}.$$
That is to say that $\mathrm{P}_A$ is the largest bound on the approximation ratio, r, that one sees over all possible instances of the problem. Likewise, the asymptotic performance ratio $R_A^{\infty}$ is:
$$R_A^{\infty} = \inf \{ r \geq 1 \mid \exists n \in \mathbb{Z}^{+} \text{ such that } R_A(x) \leq r,\ \forall x \text{ with } |x| \geq n \}.$$
That is to say that it is the same as the absolute performance ratio, with a lower bound n on the size of problem instances. These two types of ratios are used because there exist algorithms where the difference between these two is significant.
|  | r-approx | ρ-approx | rel. error | rel. error | norm. rel. error | abs. error |
|---|---|---|---|---|---|---|
| max: $f(x) \geq$ | $r^{-1}\,\mathrm{OPT}$ | $\rho\,\mathrm{OPT}$ | $(1-c)\,\mathrm{OPT}$ | $(1-c)\,\mathrm{OPT}$ | $(1-c)\,\mathrm{OPT} + c\,\mathrm{WORST}$ | $\mathrm{OPT} - c$ |
| min: $f(x) \leq$ | $r\,\mathrm{OPT}$ | $\rho\,\mathrm{OPT}$ | $(1+c)\,\mathrm{OPT}$ | $(1-c)^{-1}\,\mathrm{OPT}$ | $(1-c)^{-1}\,\mathrm{OPT} + c\,\mathrm{WORST}$ | $\mathrm{OPT} + c$ |

Here $\mathrm{WORST}$ denotes the value of a worst feasible solution, used to normalize the relative error.
Epsilon terms
In the literature, an approximation ratio for a maximization (minimization) problem of c - ϵ (min: c + ϵ) means that the algorithm has an approximation ratio of c ∓ ϵ for arbitrary ϵ > 0, but that the ratio has not (or cannot) be shown for ϵ = 0. An example of this is the optimal inapproximability ratio of 7/8 + ϵ for satisfiable MAX-3SAT instances due to Johan Håstad: for every ϵ > 0, no polynomial-time (7/8 + ϵ)-approximation exists unless P = NP. As mentioned previously, when c = 1, the problem is said to have a polynomial-time approximation scheme.
An ϵ-term may appear when an approximation algorithm introduces a multiplicative error and a constant additive error while the minimum optimum of instances of size n goes to infinity as n does. In this case, the approximation ratio is c ∓ k / OPT = c ∓ o(1) for some constants c and k. Given arbitrary ϵ > 0, one can choose a large enough N such that the term k / OPT < ϵ for every n ≥ N. For every fixed ϵ, instances of size n < N can be solved by brute force, thereby establishing an approximation ratio of c ∓ ϵ, that is, the existence of an approximation algorithm with that guarantee, for every ϵ > 0.
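Concretely, for a minimization problem with a guarantee of the form $f(x) \leq c \cdot \mathrm{OPT} + k$ (the constants c and k being those of the preceding paragraph):
$$\frac{f(x)}{\mathrm{OPT}} \;\leq\; c + \frac{k}{\mathrm{OPT}} \;\leq\; c + \epsilon \qquad \text{whenever } \mathrm{OPT} \geq k/\epsilon,$$
which holds for all instances of size $n \geq N$ once $N$ is chosen so that the minimum optimum exceeds $k/\epsilon$.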
See also
- Domination analysis considers guarantees in terms of the rank of the computed solution.
- PTAS - a type of approximation algorithm that takes the approximation ratio as a parameter
- Parameterized approximation algorithm - a type of approximation algorithm that runs in FPT time
- APX is the class of problems with some constant-factor approximation algorithm
- Approximation-preserving reduction
- Exact algorithm
Citations
- Williamson, David P.; Shmoys, David B. (2011). The Design of Approximation Algorithms. Cambridge University Press. ISBN 9780521195270. OCLC 671709856.
- Lenstra, Jan Karel; Shmoys, David B.; Tardos, Éva (1990-01-01). "Approximation algorithms for scheduling unrelated parallel machines". Mathematical Programming. 46 (1–3): 259–271. CiteSeerX 10.1.1.115.708. doi:10.1007/BF01585745. ISSN 0025-5610. S2CID 206799898.
- Agrawal, Ajit; Klein, Philip; Ravi, R. (1991). "When trees collide". Proceedings of the twenty-third annual ACM symposium on Theory of computing - STOC '91. New Orleans, Louisiana, United States: ACM Press. pp. 134–144. doi:10.1145/103418.103437. ISBN 978-0-89791-397-3. S2CID 1245448.
- Goemans, Michel X.; Williamson, David P. (November 1995). "Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming". J. ACM. 42 (6): 1115–1145. CiteSeerX 10.1.1.34.8500. doi:10.1145/227683.227684. ISSN 0004-5411. S2CID 15794408.
- Khot, Subhash; Regev, Oded (2008-05-01). "Vertex cover might be hard to approximate to within 2−ε". Journal of Computer and System Sciences. Computational Complexity 2003. 74 (3): 335–349. doi:10.1016/j.jcss.2007.06.019.
- Karpinski, Marek; Lampis, Michael; Schmied, Richard (2015-12-01). "New inapproximability bounds for TSP". Journal of Computer and System Sciences. 81 (8): 1665–1677. arXiv:1303.6437. doi:10.1016/j.jcss.2015.06.003.
- Feige, Uriel; Goldwasser, Shafi; Lovász, Laszlo; Safra, Shmuel; Szegedy, Mario (March 1996). "Interactive Proofs and the Hardness of Approximating Cliques". J. ACM. 43 (2): 268–292. doi:10.1145/226643.226652. ISSN 0004-5411.
- Arora, Sanjeev; Safra, Shmuel (January 1998). "Probabilistic Checking of Proofs: A New Characterization of NP". J. ACM. 45 (1): 70–122. doi:10.1145/273865.273901. ISSN 0004-5411. S2CID 751563.
- Johnson, David S. (1974-12-01). "Approximation algorithms for combinatorial problems". Journal of Computer and System Sciences. 9 (3): 256–278. doi:10.1016/S0022-0000(74)80044-9.
- Arora, S. (October 1996). "Polynomial time approximation schemes for Euclidean TSP and other geometric problems". Proceedings of 37th Conference on Foundations of Computer Science. pp. 2–11. CiteSeerX 10.1.1.32.3376. doi:10.1109/SFCS.1996.548458. ISBN 978-0-8186-7594-2. S2CID 1499391.
- Arora, S. (October 1997). "Nearly linear time approximation schemes for Euclidean TSP and other geometric problems". Proceedings 38th Annual Symposium on Foundations of Computer Science. pp. 554–563. doi:10.1109/SFCS.1997.646145. ISBN 978-0-8186-8197-4. S2CID 10656723.
- G. Ausiello; P. Crescenzi; G. Gambosi; V. Kann; A. Marchetti-Spaccamela; M. Protasi (1999). Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties.
- Viggo Kann (1992). On the Approximability of NP-complete Optimization Problems (PDF).
- Johan Håstad (1999). "Some Optimal Inapproximability Results". Journal of the ACM. 48 (4): 798–859. CiteSeerX 10.1.1.638.2808. doi:10.1145/502090.502098. S2CID 5120748.
References
- Vazirani, Vijay V. (2003). Approximation Algorithms. Berlin: Springer. ISBN 978-3-540-65367-7.
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 35: Approximation Algorithms, pp. 1022–1056.
- Dorit S. Hochbaum, ed. Approximation Algorithms for NP-hard Problems. PWS Publishing Company, 1997. ISBN 0-534-94968-1. Chapter 9: Various Notions of Approximations: Good, Better, Best, and More
- Williamson, David P.; Shmoys, David B. (April 26, 2011), The Design of Approximation Algorithms, Cambridge University Press, ISBN 978-0521195270
External links
- Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski and Gerhard Woeginger, A compendium of NP optimization problems.