# Complexity and subcomplexities, Part 0

Sometimes one studies objects each of which admits multiple representations, or can be obtained by more than one procedure. We are often interested in efficiency, so with each representation or procedure we associate a positive real number which measures its complexity (inefficiency). For each object we may then consider the minimum, or at least the greatest lower bound, of the complexities of all representations or procedures which yield that object. It's only natural that we are interested in computing (or estimating) this complexity of objects.

Given any representation (or procedure)   $A$   of an object   $a$,   we obtain an upper bound on the complexity   $\gamma(a)$   of the object   $a$   in terms of the complexity   $\Gamma(A)$   of the representation   $A$,   namely:

$\gamma(a)\le \Gamma(A)$

Thus by providing (constructing) representations we approximate the complexity of an object from above.
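For a concrete instance (in the setting of the complexity of natural numbers over the symbols   $1\ +\ \cdot$,   taken up later in this post), the representation

$6 = (1+1)\cdot(1+1+1)$

uses five symbols   $1$,   and hence certifies the upper bound   $\gamma(6)\le 5$   (which in this case happens to be sharp).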

But how does one obtain lower bounds on the complexity of objects? I provided an answer in 1981: by defining the notion of a subcomplexity function, and then by constructing subcomplexities. Each subcomplexity function provides a lower bound for the complexity not just of one object but of every object at once!
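To sketch how such a function can work in one concrete case (the   $1\ +\ \cdot$   complexity of natural numbers taken up below; this particular formalization is an illustration, not a reproduction of the general 1981 definition), call a function   $s$   a subcomplexity when

$s(1)\le 1,\qquad s(m+n)\le s(m)+s(n),\qquad s(m\cdot n)\le s(m)+s(n)$

Induction over expression trees then gives   $s(n)\le\gamma(n)$   for every   $n$   simultaneously. For example,   $s(n):=1+\log_2 n$   satisfies all three conditions (the additive one reduces to   $m+n\le 2mn$),   and so it provides the lower bound   $\gamma(n)\ge 1+\log_2 n$   for all   $n$   at once.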

I introduced a computational complexity of "birational" numbers (dyadic rationals)   $x:=\frac k{2^n}$   in 1981-2 (I worked at Martin Marietta at the time; outside MM, Wayne Lawton was familiar with my manuscript). I did it in the context of parallel processing: the complexity of   $x$   was meant to be the complexity of multiplying an array (an image) by   $x$.   I basically solved all the important problems in that case. My method involved introducing subcomplexity functions. I even formulated a general theory for generalized universal algebras (just the initial, trivial but basic theorem) stating that the complexity is the greatest of the subcomplexities. Perhaps I will have the patience to write it up again on this blog (and later on my ipage?).

Right now I feel like going back to a silly (but fun! and full of deep number-theoretical open questions) topic: the complexity of natural numbers expressed with three symbols,   $1\ +\ \cdot$,   and parentheses   $(\ \ )$; or else one could use Łukasiewicz notation (hm, why not?!). Yes, let me use the (reversed) Łukasiewicz notation. Anyway, the topic was introduced (as far as I know at this time, but see the note below) by Richard Guy:

• R. K. Guy, "Complexity of natural numbers with respect to 1, +, *", Open Problems section, American Mathematical Monthly, April 1986.
• R. K. Guy, Some suspiciously simple sequences, Amer. Math. Monthly 93 (1986), 186-190; 94 (1987), 965; 96 (1989), 905.
• R. K. Guy, Unsolved Problems in Number Theory, Sect. F26.
• K. Mahler and J. Popken, Over een Maximumprobleem uit de Rekenkunde (Dutch: On a maximum problem in arithmetic), Nieuw Arch. Wiskunde (3) 1 (1953), 1-15.
• OEIS — 1+* integer complexity

However, I don't know the content of the above paper by K. Mahler and J. Popken.
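Since the small cases are easy to verify by machine, here is a minimal dynamic-programming sketch (written for this post; the function name and the limit are arbitrary choices) that computes this $1,+,\cdot$ complexity of every natural number up to a given bound:

```python
def integer_complexity(limit):
    """c[n] = fewest 1s needed to write n using only 1, +, and multiplication.

    Naive dynamic programming: any optimal expression for n >= 2 ends in a
    sum m + (n - m) or a product d * (n // d) of strictly smaller values
    (multiplying by 1 never helps), so smaller table entries suffice.
    """
    c = [0] * (limit + 1)
    c[1] = 1
    for n in range(2, limit + 1):
        # best split of n as a sum of two smaller values
        best = min(c[m] + c[n - m] for m in range(1, n // 2 + 1))
        # best split of n as a product of two proper divisors
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = integer_complexity(100)
print([c[n] for n in range(1, 13)])  # → [1, 2, 3, 4, 5, 5, 6, 6, 6, 7, 8, 7]
```

This brute-force table is only practical for modest limits, but the printed values agree with the known small values of the complexity.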

When I first saw Guy's article in the open problems section of the Monthly, I immediately duplicated most of the results listed there (it was not difficult). I've written about my subcomplexity approach on sci.math and, on more than one occasion, on pl.sci.matematyka (p.s.m.). Later, when I introduced this topic on p.s.m., one of the participants, Andrzej Komisarski, also easily duplicated most of the known results (in the old-fashioned way, without using subcomplexities; I don't know whether A.K. or I obtained any original results up to or at that time, and if either of us did, it was nothing significant). Finally, I posted on p.s.m. (in Polish) a three-part post. The second part did have new results, which I now want to reproduce on this blog in English (and I wouldn't mind getting still more results). The title of that post on p.s.m. was

Podzłożoności, Część 2 (Subcomplexities, Part 2) — wh, 2001-03-09