
Log-space reduction

From Wikipedia, the free encyclopedia

In computational complexity theory, a log-space reduction is a reduction computable by a deterministic Turing machine using logarithmic space. Conceptually, this means it can keep a constant number of pointers into the input, along with a logarithmic number of fixed-size integers. It is possible that such a machine may not have space to write down its own output, so the only requirement is that any given bit of the output be computable in log-space. Formally, this reduction is executed via a log-space transducer.

Such a machine has polynomially many configurations, so log-space reductions are also polynomial-time reductions. However, log-space reductions are probably weaker than polynomial-time reductions: while any non-empty, non-full language in P is polynomial-time reducible to any other non-empty, non-full language in P, a log-space reduction from an NL-complete language to a language in L (both of which are languages in P) would imply the unlikely equality L = NL. It is an open question whether the NP-complete problems differ with respect to log-space and polynomial-time reductions.

Log-space reductions are normally used on languages in P, in which case it usually does not matter whether many-one reductions or Turing reductions are used, since L, SL, NL, and P are all closed under Turing reductions[citation needed], meaning that Turing reductions can be used to show a problem is in any of these classes. However, other subclasses of P such as NC may not be closed under Turing reductions, in which case many-one reductions must be used[citation needed].

Just as polynomial-time reductions are useless within P and its subclasses, log-space reductions are useless to distinguish problems in L and its subclasses; in particular, every non-empty, non-full problem in L is trivially L-complete under log-space reductions. While even weaker reductions exist, they are not often used in practice, because complexity classes smaller than L (that is, strictly contained or thought to be strictly contained in L) receive relatively little attention.

The tools available to designers of log-space reductions have been greatly expanded by the result that L = SL; see SL for a list of some SL-complete problems that can now be used as subroutines in log-space reductions.

Logspace computable function


A function f: {0,1}* → {0,1}* is (implicitly) logspace computable iff:[1]: 88 

  • Its output length is polynomially bounded: there exists some c such that |f(x)| ≤ |x|^c for all x.
  • The language {⟨x, i⟩ : the i-th bit of f(x) is 1} is in complexity class L.
  • The language {⟨x, i⟩ : i ≤ |f(x)|} is in complexity class L.

Intuitively, the first condition states that the function's outputs are short enough that a single pointer into the output takes only logarithmic space to store. That condition is necessary for pointers into the output to exist at all.

The second condition states that any particular output bit is computable in logspace.

The third condition states that checking whether a pointer is a valid pointer into the output is decidable in logspace.
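As an illustration (a toy example, not taken from the cited text), consider the hypothetical function f(x) = xx that duplicates its input: its output length 2|x| is polynomially bounded, the i-th output bit equals the bit of x at position i mod |x|, and index validity is just i < 2|x|. Each condition needs only a constant number of O(log |x|)-bit counters:

```python
def output_len(x: str) -> int:
    # |f(x)| = 2|x| is polynomially bounded in |x|.
    return 2 * len(x)

def output_bit(x: str, i: int) -> str:
    # The i-th bit of f(x) = xx; computing it needs only the
    # counters i and |x|, each O(log |x|) bits, so it is in logspace.
    return x[i % len(x)]

def valid_index(x: str, i: int) -> bool:
    # Checking that i points inside f(x) is also a logspace test.
    return 0 <= i < output_len(x)

x = "1011"
fx = "".join(output_bit(x, i) for i in range(output_len(x)))
# fx == "10111011"
```

The full string fx is assembled here only for demonstration; an implicit computation would never store it.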

Equivalently, a function is logspace computable iff it is computed by a Turing machine that has a log-length work tape, halts on every input, and has a write-only, write-once output tape, meaning that at each step the machine may either write nothing, or write a bit and move the write-head forward by one.[1]: 94  Such a machine is usually called a logspace transducer. Note that such a machine, if it halts, must halt within polynomially many steps, since its work tape has log-length; therefore its output length is polynomially bounded.

One intuition is that such a function can be computed by a program that keeps only a constant number of pointers into the input, together with a constant number of counters whose values are at most poly(n), i.e. of O(log n) bits each. This is because a counter machine with a constant number of counters that count up to poly(n) is equivalent to a Turing machine with space complexity O(log n).[2]
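For instance, deciding whether a string is a palindrome (a standard example of a language in L; this sketch is illustrative, not from the cited sources) needs only two indices into the read-only input:

```python
def is_palindrome(x: str) -> bool:
    # Two pointers into the read-only input; each index fits in
    # O(log |x|) bits, so this decision runs in logarithmic space.
    i, j = 0, len(x) - 1
    while i < j:
        if x[i] != x[j]:
            return False
        i += 1
        j -= 1
    return True
```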

Closure


The most important property of logspace computability is that if two functions f and g are logspace computable, then so is their composition g ∘ f. This makes logspace reducibility transitive.

Given two logspace transducers, their composition is still a logspace transducer: feed the output of one transducer (A→B) into another (B→C). At first glance this seems impossible: intuitively, the composed A→C transducer would need to store the entire output tape of the A→B transducer on its work tape in order to feed it into the B→C transducer. The following construction shows this is unnecessary.

Define the A→C transducer as follows: it simulates the B→C transducer. Every time the B→C transducer needs to read a bit of its input, the A→C transducer re-runs the A→B transducer to recompute only the exact output bit that is needed, so only one bit of the A→B transducer's output needs to be stored at any moment.
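The recomputation trick can be sketched in ordinary code, abstracting away the tape machinery. Here a transducer is modeled (hypothetically) as a pair of functions giving the output length and any single output bit, and a lazy read-only view of the intermediate string stands in for re-running the first transducer on demand:

```python
class LazyOutput:
    """Read-only view of a transducer's output: each access to
    position j re-runs the first transducer for that single bit,
    so the full intermediate string is never stored."""
    def __init__(self, bit_fn, len_fn, x):
        self.bit_fn, self.x = bit_fn, x
        self._len = len_fn(x)
    def __len__(self):
        return self._len
    def __getitem__(self, j):
        return self.bit_fn(self.x, j)

def compose(ab_len, ab_bit, bc_len, bc_bit):
    # A→C transducer: simulate B→C, recomputing A→B's bits on demand.
    def ac_len(x):
        return bc_len(LazyOutput(ab_bit, ab_len, x))
    def ac_bit(x, i):
        return bc_bit(LazyOutput(ab_bit, ab_len, x), i)
    return ac_len, ac_bit

# Toy transducers: A→B duplicates the input, B→C flips every bit.
dup_len = lambda x: 2 * len(x)
dup_bit = lambda x, i: x[i % len(x)]
neg_len = lambda s: len(s)
neg_bit = lambda s, i: "1" if s[i] == "0" else "0"

ac_len, ac_bit = compose(dup_len, dup_bit, neg_len, neg_bit)
x = "10"
out = "".join(ac_bit(x, i) for i in range(ac_len(x)))
# out == "0101"  (duplicate "10" -> "1010", then flip -> "0101")
```

Only one intermediate bit is ever materialized per access, mirroring the single-bit storage bound of the construction.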

Logspace reduction


A language A is logspace (many-one) reducible to another language B, notated as A ≤_L B, iff there exists an implicitly logspace computable function f such that for all x, x ∈ A ⟺ f(x) ∈ B. This is a transitive relation, because logspace computability is closed under composition, as previously shown.

A language is NL-complete iff it is in NL and every language in NL is logspace reducible to it.

Most naturally occurring polynomial-time reductions in complexity theory are in fact logspace reductions. In particular, this is true of the standard proofs that the SAT problem is NP-complete and that the circuit value problem is P-complete, and it is often the case for proofs that the true quantified Boolean formula problem is PSPACE-complete. The reason is that the memory such reduction constructions need is for counting up to p(n) for some polynomial p in the input length n, and this can be done in logarithmic space.[3]: 180 
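A quick sanity check of that space bound (the concrete numbers below are purely illustrative): a counter that counts up to p(n) = n^3 occupies only about 3·log2(n) bits, logarithmic in n.

```python
n = 1 << 20              # input length n = 2^20
p_n = n ** 3             # polynomial bound p(n) = n^3 = 2^60
counter_bits = p_n.bit_length()   # work-tape bits to hold the counter
# counter_bits == 61, i.e. 3*log2(n) + 1, while n itself is ~10^6
```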

While a logspace many-one reduction is also a polynomial-time many-one reduction, it is unknown whether the two notions are equivalent, i.e., whether there are problems that are NP-complete under polynomial-time reductions but not under logspace reductions. A resolution of this question would also resolve another open problem: are deterministic linear bounded automata equivalent to nondeterministic linear bounded automata?[4]

References

  1. ^ a b Arora, Sanjeev; Barak, Boaz (2009). Computational complexity. A modern approach. Cambridge University Press. ISBN 978-0-521-42426-4. Zbl 1193.68112.
  2. ^ Meyer, Albert R.; Shamos, Michael Ian (1977). "Time and space". In Jones, Anita K. (ed.). Perspectives on Computer Science: From the 10th Anniversary Symposium at the Computer Science Department, Carnegie-Mellon University (PDF). New York: Academic Press. pp. 125–146.
  3. ^ Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman. ISBN 978-0-7167-1045-5. OCLC 4195125.
  4. ^ Lind, John; Meyer, Albert R. (1973-07-01). "A characterization of log-space computable functions". SIGACT News. 5 (3): 26–29. doi:10.1145/1008293.1008295. ISSN 0163-5700.
