Year: 2013
9/17 [elc]   ELC Seminar (C01 and A01)
Date:  2013.9.17 (Tue)
Time:  14:00 - 17:00
Place: CELC seminar room (new one! on the 4th floor)

Speakers:
1.  Sebastian Muller (ELC Postdoc fellow, C01 and A01)
    Polylogarithmic Cuts in Models of Weak Arithmetic

2. Xiaohui Bei (Nanyang Technological Univ., Singapore)
   On the Complexity of Trial and Error

3. Ning Chen (Nanyang Technological Univ., Singapore)
   Solving Linear Programming with Constraints Unknown

Details (with abstracts):

1. Sebastian Muller (ELC postdoc researcher, C01 and A01)

Title: Polylogarithmic Cuts in Models of Weak Arithmetic
Abstract:
Propositional proof systems are polynomial time computable surjective
functions that map strings to propositional tautologies. We call any
preimage of a tautology a proof. Properties of such systems are
closely related to the coNP = NP question; in fact, coNP = NP iff
such a proof system exists with short proofs for every tautology.
In this light it is interesting to compare two proof systems by
their minimal lengths of proofs for any family of tautologies. We say
that one proof system simulates another, if for any proof in the
latter system there is one in the former system that is approximately
of the same length. Two examples of propositional proof systems are
Frege systems and bounded depth Frege systems. These systems are
textbook-style proof systems, as depicted in most basic logic
literature. They differ in strength, however, and we do not expect
bounded depth Frege systems to simulate Frege systems.
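
The two notions used informally above can be stated precisely; the following is the standard Cook-Reckhow formulation, with notation chosen here for reference rather than taken from the talk:

```latex
% Standard (Cook--Reckhow) formulation of the notions used above.
% A proof system is a polynomial-time surjective map
%   f : \{0,1\}^* \to \mathrm{TAUT};
% any w with f(w) = \tau is an f-proof of \tau.

% f is polynomially bounded (every tautology has a short proof):
\exists \text{ polynomial } p \;\forall \tau \in \mathrm{TAUT}
  \;\exists w :\; f(w) = \tau \;\wedge\; |w| \le p(|\tau|).
% Cook--Reckhow: coNP = NP iff a polynomially bounded system exists.

% f simulates g ("approximately the same length" = polynomial in it):
\exists \text{ polynomial } p \;\forall w, \tau :\;
  g(w) = \tau \;\Longrightarrow\;
  \exists w' :\; f(w') = \tau \;\wedge\; |w'| \le p(|w|).
```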

Models of weak arithmetic are well known to capture the strength of
certain such propositional proof systems. We exploit this property
to explore the relation between Frege and bounded depth Frege systems.
More precisely, we will show that a certain initial segment of any
model that relates to bounded depth Frege systems, relates to Frege
systems. An easy argument will then allow us to turn Frege proofs of
any given length into bounded depth Frege proofs such that the proof
length is not increased exponentially. This yields a sub-exponential
simulation of Frege by bounded depth Frege systems.

2. Xiaohui Bei (Nanyang Technological Univ., Singapore)
Title: On the Complexity of Trial and Error
Abstract:
Motivated by certain applications in which the objects
under investigation are unknown or not directly accessible because
of various limitations, we propose a trial-and-error model to examine
search problems in which inputs are unknown.

More specifically, we consider constraint satisfaction problems
$\bigwedge_i C_i$, where the constraints $C_i$ are hidden, and the
goal is to find a solution satisfying all constraints. We can
adaptively propose a candidate solution (i.e., trial), and there is
a verification oracle that either confirms that it is a valid solution,
or returns the index $i$ of a violated constraint (i.e., error), with
the exact content of $C_i$ still hidden.
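
The interaction pattern described above can be sketched as follows; this is only an illustration of the model, with all names and the toy instance invented here, not taken from the paper:

```python
# Sketch of the trial-and-error model: constraint contents are hidden,
# and the solver only ever learns the index of some violated constraint.

def verification_oracle(constraints, trial):
    """Return None if trial satisfies all constraints, otherwise the
    index i of a violated constraint C_i (its content stays hidden)."""
    for i, c in enumerate(constraints):
        if not c(trial):
            return i
    return None

def trial_and_error_search(candidates, oracle):
    """Adaptively propose candidates until the oracle accepts one.
    A real algorithm would use the returned index to prune the search;
    this naive loop only shows the interaction pattern."""
    for trial in candidates:
        if oracle(trial) is None:
            return trial
    return None

# Toy instance: two hidden constraints over a single integer variable.
hidden = [lambda x: x % 2 == 0, lambda x: x > 3]
oracle = lambda t: verification_oracle(hidden, t)
print(trial_and_error_search(range(10), oracle))  # -> 4
```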

We study the time and trial complexities of a number of natural
CSPs, summarized as follows. On one hand, despite the seemingly very
little information provided by the oracle, efficient algorithms do
exist for Nash, Core, Stable Matching, and SAT problems, whose
unknown-input versions are shown to be as hard as the corresponding
known-input versions up to a polynomial factor. The techniques
employed vary considerably, including, e.g., order theory and the
ellipsoid method with a strong separation oracle.

On the other hand, there are problems whose complexities are
substantially increased in the unknown-input model. In particular,
no time-efficient algorithms exist for Graph Isomorphism and Group
Isomorphism (unless PH collapses or P = NP). The proofs use quite
nonstandard reductions, in which an efficient simulator is carefully
designed to simulate a desirable but computationally unaffordable
oracle.

Our model investigates the value of input information, and our
results demonstrate that the lack of input information can introduce
various levels of extra difficulty. The model accommodates a wide
range of combinatorial and algebraic structures, and exhibits
intimate connections with (and hopefully can also serve as a useful
supplement to) certain existing learning and complexity theories.

3. Ning Chen (Nanyang Technological Univ., Singapore)
Title: Solving Linear Programming with Constraints Unknown
Abstract:
What is the value of information in solving linear programming?
The celebrated ellipsoid algorithm tells us that the full
information of input constraints is not necessary; the algorithm
works as long as there exists an oracle that, on a proposed
candidate solution, returns a violation in the format of a
separating hyperplane. Can linear programming still be efficiently
solved if the returned violation is in another format?

We study this question in the aforementioned trial-and-error setting.
We give an algorithm with running time $O(m^{\mathrm{poly}(n)} \cdot L)$,
where $m$ and $n$ are the numbers of constraints and variables,
respectively, and $L$ is the input size of the linear program.
The exponential dependence on $n$ is unfortunately unavoidable; we
show a lower bound of $\Omega(m^{\lfloor n/2 \rfloor})$ on the number of queries
needed.  Meanwhile, if the oracle provides more violation
information---the index of a "most violated" constraint, measured
by the Euclidean distance of the proposed solution and the
half-spaces defined by the constraints---then we show that the
linear program can be solved in polynomial time.
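
The stronger oracle described above can be illustrated with a small sketch; the interface, names, and toy data here are assumptions for illustration, not the paper's construction:

```python
import math

# Sketch of the "most violated constraint" oracle: for a proposed point
# x and hidden constraints a_i . x <= b_i, return the index of the
# constraint whose half-space is furthest from x in Euclidean distance.

def most_violated(A, b, x):
    """Index of the most violated constraint, or None if x is feasible.
    The distance from x to the half-space {y : a.y <= b} is
    max(0, a.x - b) / ||a||."""
    worst, worst_dist = None, 0.0
    for i, (a, bi) in enumerate(zip(A, b)):
        dot = sum(aj * xj for aj, xj in zip(a, x))
        norm = math.sqrt(sum(aj * aj for aj in a))
        dist = max(0.0, dot - bi) / norm
        if dist > worst_dist:
            worst, worst_dist = i, dist
    return worst

# Hidden system: x1 <= 1 and x2 <= 1. The point (3, 2) violates both,
# but lies further from the half-space x1 <= 1, so index 0 is returned.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 1.0]
print(most_violated(A, b, [3.0, 2.0]))   # -> 0
print(most_violated(A, b, [0.5, 0.5]))   # -> None (feasible)
```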

The proofs of the results employ a variety of geometric techniques,
including McMullen's Upper Bound Theorem, the weighted spherical
Voronoi diagram, and the furthest Voronoi diagram. In addition, we
give an alternative proof of a conjecture of Laszlo Fejes Toth on
bounding the number of disconnected components formed by the
union of $m$ convex bodies in $R^n$. Our proof, inspired by the
Gauss-Bonnet Theorem in global differential geometry, is independent
of the known proof and reveals more insight into the problem
and the bound.

END OF SEMINAR INFO.


horiyama@al.ics.saitama-u.ac.jp