## Nonmonotonic and Paraconsistent Reasoning: From Basic Entailments to Plausible Relations [chapter]

Ofer Arieli, Arnon Avron

1999, Lecture Notes in Computer Science
In this paper we develop frameworks for logical systems which are able to reflect not only nonmonotonic patterns of reasoning, but also paraconsistent reasoning. For this we consider a sequence of generalizations of the pioneering works of Gabbay, Kraus, Lehmann, Magidor and Makinson. Our sequence of frameworks culminates in what we call plausible, nonmonotonic, multiple-conclusion consequence relations (which are based on a given monotonic one). Our study yields intuitive justifications for conditions that have been proposed in previous frameworks, and also clarifies the connections among some of these systems. In addition, we present a general method for constructing plausible nonmonotonic relations. This method is based on a multiple-valued semantics, and on Shoham's idea of preferential models.

Abstract For every n, we describe an explicit construction of a graph on n vertices with at most O(n^{2−ε}) edges, for ε = 0.133..., that contains every graph on n vertices with maximum degree 3 as a subgraph. The construction is explicit, but the proof of its properties is based on probabilistic arguments. It is easy to see that each such graph has Ω(n^{4/3}) edges. The study of this problem is motivated by questions in VLSI circuit design. Vera Asodi was awarded the Deutsch Prize for the year 2000 in recognition of her research.

Abstract The loss version of the multi-armed bandit problem is carried out in T iterations. At the beginning of each iteration an adversary assigns losses from [0, 1] to each of the K options (also called arms). Then, without knowing the adversary's assignments, we are required to select one of the K arms, and we suffer the loss that was assigned to it. Here we consider the loss game, which is the adversarial version of the loss version of the multi-armed bandit problem. In this version no stochastic assumption is made, so the results hold for any possible assignment of K × T losses. We compete against L_opt, the optimal loss, which is the minimal total loss of any consistent choice of an arm in this game, i.e., the performance of the best arm. Our goal is to minimize the regret: the maximum, over all possible assignments of losses, of the difference between our expected total loss and L_opt. In a previous work Auer, Cesa-Bianchi, Freund and Schapire showed that the regret in the loss game has an upper bound of O(T^{1/2}) and a lower bound of Ω(L_opt^{1/2}).
Since the losses in the loss game are normalized to the [0, 1] range, a loss of 1 is the upper bound on the loss possible in any one iteration. Thus T, the number of iterations, can even be higher than the total loss of the worst consistent choice of an arm (i.e., the performance of the worst arm). In this work an upper bound of O(L_opt^{2/3}) on the regret is presented.

Abstract We consider segment intersection searching amidst (possibly intersecting) algebraic arcs in the plane. We show how to preprocess n arcs in time O(n^{2+ε}) into a data structure of size O(n^{2+ε}), for any ε > 0, such that the k arcs intersecting a query segment can be counted in time O(log n) or reported in time O(log n + k). This problem was extensively studied in restricted settings (e.g., amidst segments, circles or circular arcs), but no solution with comparable performance was previously presented for the general case of possibly intersecting algebraic arcs. Our data structure for the general case matches or improves (sometimes by an order of magnitude) the size of the best previously presented solutions for the special cases. As an immediate application of this result, we obtain an efficient data structure for the triangular windowing problem, which is a generalization of triangular range searching. As another application, the first substantially sub-quadratic algorithm for a red-blue intersection counting problem is derived. We also describe simple data structures for segment intersection searching among disjoint arcs, and for ray shooting among algebraic arcs.

Abstract This paper shows the problem of finding the closest vector in an n-dimensional lattice to be NP-hard to approximate to within factor n^{c/log log n} for some constant c > 0.

Abstract This paper strengthens the low-error PCP characterization of NP, coming closer to the upper limit of the BGLR conjecture.
Namely, we prove that witnesses for membership in any NP language can be verified with a constant number of accesses, and with an error probability exponentially small in the number of bits accessed, where this number is as high as log^β n, for any constant β < 1. (The BGLR conjecture claims the same for any β ≤ 1.) Our results are in fact stronger, implying that the Gap-Quadratic-Solvability problem is NP-hard even if the equations are restricted to having a constant number of variables. That is, given a system of quadratic equations over a field F (of size up to 2^{log^β n}), where each equation depends on a constant number of variables, it is NP-hard to decide between the case where there is a common solution for all of the equations, and the case where any assignment satisfies no more than a 2/|F| fraction of them. At the same time, our proof presents a direct construction of a low-degree test whose error probability is exponentially small in the number of bits accessed. Such a result was previously known only by relying on recursive applications of the entire PCP theorem.

Abstract We address the problem of predicting user intentions in cases of pointing ambiguities in graphical user interfaces. We argue that it is possible to heuristically resolve pointing ambiguities using implicit information that resides in natural pointing gestures, thus eliminating the need for explicit interaction methods and encouraging natural human-computer interaction. We present two speed-accuracy measures for predicting the size of the intended target object. These two measures are tested empirically and shown to be valid and robust. Additionally, we demonstrate the use of exact mouse location for disambiguation and the use of estimated movement continuation for predicting intended target objects at early stages of the pointing gesture.

Abstract We consider the on-line scheduling problem of jobs with precedence constraints on m parallel identical machines.
Each job has a processing time requirement, and may depend on other jobs (i.e., it has to be processed after them). A job arrives only after its predecessors have been completed. The cost of an algorithm is the time at which the last job is completed. We show lower bounds on the competitive ratio of on-line algorithms for this problem in several versions. We prove a lower bound of 2 − 1/m on the competitive ratio of any deterministic algorithm (with or without preemption) and a lower bound of 2 − 2/(m + 1) on the competitive ratio of any randomized algorithm (with or without preemption). The lower bounds for the cases in which preemption is allowed require arbitrarily long sequences. If we use only sequences of length O(m^2), we can show a lower bound of 2 − 2/(m + 1) on the competitive ratio of deterministic algorithms with preemption, and a lower bound of 2 − O(1/m) on the competitive ratio of any randomized algorithm with preemption. All the lower bounds hold even for sequences of unit jobs only. The best algorithm known for this problem is the well-known List Scheduling algorithm of Graham. The algorithm is deterministic and does not use preemption. The competitive ratio of this algorithm is 2 − 1/m. Our randomized lower bounds are very close to this bound (a difference of O(1/m)) and our deterministic lower bounds match this bound.

Abstract We consider the problem of scheduling a sequence of jobs on m parallel identical machines so as to maximize the minimum load over the machines. This situation corresponds to a case in which a system consisting of the m machines is alive (i.e., productive) only when all the machines are alive, and the system should be kept alive as long as possible. It is well known that any on-line deterministic algorithm for identical machines has a competitive ratio of at least m and that greedy is an m-competitive algorithm.
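The greedy strategy for this machine-covering objective, assigning each arriving job to the currently least-loaded machine, can be sketched as follows (a minimal illustration of the strategy, not code from the paper; the function name is ours):

```python
def greedy_machine_cover(jobs, m):
    """Assign each arriving job to the currently least-loaded machine.

    Returns the final load vector; the machine-covering objective is
    the minimum entry of that vector.
    """
    loads = [0] * m
    for job in jobs:
        i = loads.index(min(loads))  # index of a least-loaded machine
        loads[i] += job
    return loads

# Example: 4 unit jobs on 2 machines -> loads [2, 2], minimum load 2.
print(greedy_machine_cover([1, 1, 1, 1], m=2))
```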
In contrast, we design an on-line randomized algorithm which is O(√m log m)-competitive, and we show a lower bound of Ω(√m) for any on-line randomized algorithm. In the case where the weights of the jobs are polynomially related we design an optimal O(log m)-competitive randomized algorithm and a matching tight lower bound for any on-line randomized algorithm. In fact, if F is the ratio between the weights of the largest job and the smallest job, then our randomized algorithm is O(log F)-competitive. A sub-problem that we solve, which is interesting in its own right, is the problem where the value of the optimal algorithm is known in advance. Here we show a deterministic (constant) 2 − 1/m competitive algorithm. We also show that our algorithm is optimal for two, three and four machines and that no on-line deterministic algorithm can achieve a competitive ratio better than 1.75 for m ≥ 4 machines.

Abstract In this paper we derive convergence rates for Q-learning. We show an interesting relationship between the convergence rate and the learning rate used in Q-learning. For a polynomial learning rate, one which is 1/t^ω at time t where ω ∈ (1/2, 1), we show that the convergence rate is polynomial in 1/(1 − γ), where γ is the discount factor. In contrast, we show that for a linear learning rate, one which is 1/t at time t, the convergence rate has an exponential dependence on 1/(1 − γ). In addition we give a simple example proving that this exponential behavior is inherent for a linear learning rate.

Abstract Planar maps are fundamental structures in computational geometry. They are used to represent the subdivision of the plane into regions and have numerous applications. We describe the planar map package of Cgal, the Computational Geometry Algorithms Library. We discuss problems that arose in the design and implementation of the package and report the solutions we have found for them.
In particular we introduce the two main classes of the design, planar maps and topological maps, which enable a convenient separation between geometry and topology. We also describe the geometric traits which make our package flexible by enabling its use with any family of curves, as long as the user supplies a small set of operations for the family. Finally, we present the algorithms we implemented for point location in the map, together with experimental results that compare their performance.

Abstract This paper presents a streaming technique for synthetic texture-intensive 3D animation sequences. There is a short latency time while downloading the animation, until an initial fraction of the compressed data is read by the client. As the animation is played, the remainder of the data is streamed online seamlessly to the client. The technique exploits frame-to-frame coherence for transmitting geometric and texture streams. Instead of using the original textures of the model, the texture stream consists of view-dependent textures which are generated by rendering nearby views offline. These textures have a strong temporal coherency and can thus be well compressed. As a consequence, the bandwidth of the stream of view-dependent textures is narrow enough for it to be transmitted together with the geometry stream over a low-bandwidth network. These two streams maintain a small online cache of geometry and view-dependent textures from which the client renders the walkthrough sequence in real time. The overall data transmitted over the network is an order of magnitude smaller than an MPEG post-rendered sequence of equivalent image quality. Presented at SIGGRAPH.

Abstract A Multiple Structural Alignment algorithm is presented in this paper. The algorithm accepts an ensemble of protein structures and finds the largest substructure (core) of Cα atoms whose geometric configuration appears in all the molecules of the ensemble.
Both the detection of this core and the resulting structural alignment are done simultaneously. Other sufficiently large multi-structural superimpositions are detected as well. Our method is based on the Geometric Hashing paradigm and on a superimposition clustering technique which represents superimpositions by sets of matching atoms. The algorithm proved to be efficient on real data in a series of experiments. The same method can be applied to any ensemble of molecules (not necessarily proteins), since our basic technique is sequence-order independent.

Abstract We consider a network providing Differentiated Services (Diffserv), which allow network service providers to offer different levels of Quality of Service (QoS) to different traffic streams. We focus on loss, and first show that only trivial bounds can be obtained by means of traditional competitive analysis. We then introduce a new approach for estimating the loss of an online policy, called loss-bounded analysis. In loss-bounded analysis the loss of an online policy is bounded by the loss of an optimal offline policy plus a constant fraction of the benefit of an optimal offline policy. We derive tight upper and lower bounds for various settings of Diffserv parameters using the new loss-bounded model. We believe that loss-bounded analysis is an important technique that may complement traditional competitive analysis and provide new insight and interesting results.

Abstract We provide a general framework for constructing natural consequence relations for paraconsistent and plausible nonmonotonic reasoning. The framework is based on preferential systems whose preferences are based on the satisfaction of formulas in models. We show that these natural preferential systems, which were originally designed for paraconsistent reasoning, satisfy a key condition (stopperedness or smoothness) from the theoretical research of nonmonotonic reasoning.
Consequently, the nonmonotonic consequence relations that they induce satisfy the desired conditions of plausible consequence relations. Hence our framework encompasses different types of preferential systems that were developed from different motivations of paraconsistent reasoning and non-monotonic reasoning, and reveals an important link between them.

Abstract We present TVLA (Three-Valued-Logic Analyzer). TVLA is a "YACC"-like framework for automatically constructing static-analysis algorithms from an operational semantics, where the operational semantics is specified using logical formulae. TVLA has been implemented in Java and was successfully used to perform shape analysis on programs manipulating linked data structures (singly and doubly linked lists), to prove safety properties of Mobile Ambients, and to verify the partial correctness of several sorting programs.

Abstract Competitive analysis fails to model locality of reference in the online paging problem. To deal with this, Borodin et al. introduced the access graph model for the paging problem, which attempts to capture the locality of reference. However, the access graph model has a number of troubling aspects: the access graph has to be known in advance to the paging algorithm, and the memory required to represent the access graph itself may be very large. In this paper we present truly online strongly competitive paging algorithms in the access graph model that do not have any prior information on the access sequence. We present both deterministic and randomized algorithms. The algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., asymptotically no more memory than needed to store the virtual address translation table. In fact, the memory can be reduced to O(k log k) bits using appropriate probabilistic data structures.

Abstract Unfair metrical task systems are a generalization of online metrical task systems.
In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain the following results:

1. Better randomized algorithms for unfair metrical task systems on the uniform metric space.
2. Better randomized algorithms for metrical task systems on general metric spaces, O(log^2 n (log log n)^2)-competitive, improving on the best previous result of O(log^5 n log log n).
3. A tight randomized competitive ratio for the k-weighted caching problem on k + 1 points, O(log k), improving on the best previous result of O(log^2 k).

Presented at the thirty-second annual ACM Symposium on Theory of Computing.

Abstract This paper gives a nearly logarithmic lower bound on the randomized competitive ratio for the Metrical Task Systems model [BLS92]. This implies a similar lower bound for the extensively studied K-server problem. Our proof is based on proving a Ramsey-type theorem for metric spaces. In particular we prove that in every metric space there exists a large subspace which is approximately a "hierarchically well-separated tree" (HST) [Bar96]. This theorem may be of independent interest.

Abstract Optical mapping is a novel technique for generating the restriction map of a DNA molecule by observing many single, partially digested copies of it, using fluorescence microscopy. The real-life problem is complicated by numerous factors: false positive and false negative cut observations, inaccurate location measurements, unknown orientations and faulty molecules. We present an algorithm for solving the real-life problem. The algorithm combines continuous optimization and combinatorial algorithms, applied to a non-uniform discretization of the data. We present encouraging results on real experimental data and on simulated data.

Abstract We consider the classical problem of scheduling jobs in a multiprocessor setting in order to minimize the flow time (total time in the system).
The performance of the algorithm, both in offline and online settings, can be significantly improved if we allow preemption, i.e., interrupting a job and later continuing its execution, perhaps migrating it to a different machine. Preemption is essential for making a scheduling algorithm efficient. While in the case of a single processor most operating systems can easily handle preemptions, migrating a job to a different machine incurs a huge overhead; thus migration is not commonly used in most multiprocessor operating systems. The natural question is whether migration is an inherent component of an efficient scheduling algorithm, in either the online or the offline setting. Leonardi and Raz (STOC '97) showed that the well-known algorithm shortest remaining processing time (SRPT) performs within a logarithmic factor of the optimal algorithm. Note that SRPT must use both preemption and migration to schedule the jobs. It is not known whether better approximation factors can be reached; in fact, in the on-line setting, Leonardi and Raz showed that no algorithm can achieve a better bound. Without migration, no (offline or online) approximations are known. This paper introduces a new algorithm that does not use migration, works online, and is just as effective (in terms of approximation ratio) as the best known offline algorithm (SRPT) that uses migration. Presented at the Symposium on the Theory of Computing '99.

Abstract We provide the first strongly polynomial algorithms with the best approximation ratio for all three variants of the unsplittable flow problem (UFP). In this problem we are given a (possibly directed) capacitated graph with n vertices and m edges, and a set of terminal pairs, each with its own demand and profit. The objective is to connect a subset of the terminal pairs, each by a single flow path, so as to maximize the total profit of the satisfied terminal pairs subject to the capacity constraints.
Classical UFP, in which demands must be lower than edge capacities, is known to have an O(√m) approximation algorithm. We obtain the same result with a strongly polynomial combinatorial algorithm. The extended UFP case is when some demands might be higher than edge capacities. For that case we both improve the current best approximation ratio and use strongly polynomial algorithms. We also use a lower bound to show that the extended case is provably harder than the classical case. The last variant is bounded UFP, where demands are at most 1/K of the minimum edge capacity. Using strongly polynomial algorithms here as well, we improve the currently best known algorithms. Specifically, for K = 2 our results are better than the lower bound for classical UFP, thereby separating the two problems.

Abstract In the minimum fill-in problem, one wishes to find a set of edges of smallest size whose addition to a given graph will make it chordal. The problem has important applications in numerical algebra and has been studied intensively since the 1970s. We give the first polynomial approximation algorithm for the problem. Our algorithm constructs a triangulation whose size is at most eight times the square of the optimum size. The algorithm builds on the recent parameterized algorithm of Kaplan, Shamir and Tarjan for the same problem. For bounded-degree graphs we give a polynomial approximation algorithm with a polylogarithmic approximation ratio. We also improve the parameterized algorithm.

Abstract In an edge modification problem one has to change the edge set of a given graph as little as possible so as to satisfy a certain property. We prove in this paper the NP-hardness of a variety of edge modification problems with respect to some well-studied classes of graphs. These include perfect, chordal, chain, comparability, split and asteroidal-triple-free graphs. We show that some of these problems become polynomial when the input graph has bounded degree.
We also give a general constant-factor approximation algorithm for deletion and editing problems on bounded-degree graphs with respect to properties that can be characterized by a finite set of forbidden induced subgraphs.

Abstract Let S be a set of n points in R^d. A k-set of S is a subset S′ ⊂ S such that S′ = S ∩ H for some halfspace H and |S′| = k. The problem of determining tight asymptotic bounds on the maximum number of k-sets is one of the most intriguing open problems in combinatorial geometry. Due to its importance in analyzing geometric algorithms, the problem has caught the attention of computational geometers as well. A close-to-optimal solution for the problem remains elusive even in the plane. The best asymptotic upper and lower bounds in the plane are O(nk^{1/3}) and n · 2^{Ω(√log k)}, respectively. In this paper we obtain the following result. Theorem: The number of k-sets in a set of n points in R^3 is O(nk^{3/2}). This result improves the previous best known asymptotic upper bound of O(nk^{5/3}) (see Dey and Edelsbrunner and Agarwal et al.). The best known asymptotic lower bound for the number of k-sets in three dimensions is nk · 2^{Ω(√log k)}.

Abstract Some animals use counter-shading in order to prevent their detection by predators. Counter-shading means that the albedo of the animal is such that its image has a flat intensity function rather than a convex intensity function. This implies that there might exist predators who can detect 3D objects based on the convexity of the intensity function. In this paper, we suggest a mathematical model which describes a possible explanation of this detection ability. We demonstrate the effectiveness of convexity-based camouflage breaking using an operator ("D_arg") for detection of 3D convex or concave graylevels. Its high robustness and the biological motivation make D_arg particularly suitable for camouflage breaking.
As will be demonstrated, the operator is able to break very strong camouflage, which might delude even human viewers. Since the operator is non-edge-based, its performance is juxtaposed with that of a representative edge-based operator in the task of camouflage breaking. Better performance is achieved by D_arg for both animal and military camouflage breaking.

Abstract Detection of regions of interest is usually based on edge maps. We suggest a novel non-edge-based mechanism for detection of regions of interest, which extracts 3D information from the image. Our operator detects smooth 3D convex and concave objects based on direct processing of intensity values. Invariance to a large family of functions is mathematically proved. It follows that our operator is robust to variation in illumination, orientation, and scale, in contrast with most other attentional operators. The operator is also demonstrated to efficiently detect 3D objects camouflaged in noisy areas. An extensive comparison with edge-based attentional operators is delineated.

Abstract This paper introduces an implicit representation of the (u, v) texture mapping. Instead of using the traditional explicit (u, v) mapping coordinates, a non-distorted piecewise embedding of the triangular mesh is created, on which the original texture is remapped, yielding warped textures. This creates an effective atlas of the mapped triangles and provides a compact encoding of the texture mapping.

Abstract We introduce a new notion of 'neighbors' in geometric permutations. We conjecture that the maximum number of neighbors in a set of n pairwise disjoint convex bodies in R^d is O(n), and we settle this conjecture for d = 2. We show that if the set of pairs of neighbors in a set S is of size N, then S admits at most O(N^{d−1}) geometric permutations. Hence we obtain an alternative proof of a linear upper bound on the number of geometric permutations for any finite family of pairwise disjoint convex bodies in the plane.
Abstract We present an efficient online subpath profiling algorithm, OSP, that reports the hot subpaths executed by a program in a given run. The hot subpaths can start at arbitrary basic block boundaries, and their identification is important for code optimization; e.g., to locate program traces in which optimizations could be most fruitful, and to help programmers identify performance bottlenecks. The OSP algorithm is online in the sense that at any point during execution it reports the hot subpaths as observed so far. It has very low memory and runtime overheads, and exhibits high accuracy in reports for benchmarks such as JLex and FFT. These features make the OSP algorithm potentially attractive for use in just-in-time (JIT) optimizing compilers, in which profiling performance is crucial and it is useful to locate hot subpaths as early as possible. The OSP algorithm is based on an adaptive sampling technique that makes effective use of memory with small overhead. Both memory and runtime overheads can be controlled, and the OSP algorithm can therefore be used for arbitrarily large applications, realizing a tradeoff between report accuracy and performance. We have implemented a Java prototype of the OSP algorithm for Java programs. The implementation was tested on programs from the Java Grande benchmark suite and exhibited a low average runtime overhead.

Abstract An application based on parallel coordinates (abbr. ∥-coords) on "approximated planes" was presented at this conference in 2000 by Matskewich. With parallel coordinates, objects in R^n can be represented, without loss of information, by planar patterns for arbitrary n. In R^2, embedded in the projective plane, parallel coordinates induce a point ↔ line duality and other dualities which generalize nicely to R^n. In 1981 it was shown that conics are mapped into conics in 6 different ways. Later this was generalized to bounded and unbounded convex sets and eventually applied to higher dimensions.
Since then the question of what the dual image of general polynomial curves is has remained unanswered. Here we show that the dual image in ∥-coords of an algebraic curve of degree n is also algebraic, of degree n(n − 1) in the absence of singular points. Further, an algorithm for the construction of the dual, even in the presence of singularities, is presented here. The result is of interest in its own right and opens the prospect of extending the multi-dimensional applications.

Abstract The construction of complex, evolving software systems requires a high-level design model. However, this model tends not to be enforced on the system, leaving room for the implementors to diverge from it, thus differentiating the designed system from the actually implemented one. The essence of the problem of enforcing such models lies in their globality: the principles and guidelines conveyed by these models cannot be localized in a single module; they must be observed everywhere in the system. A mechanism for enforcement needs to have a global view of the system and to report breaches of the model at the time they occur. Aspect-Oriented Programming has been proposed as a new software engineering approach. Unlike contemporary software engineering methods, which are module-centered, Aspect-Oriented Programming provides mechanisms for the definition of cross-module interactions. We explore the possibility of using Aspect-Oriented Programming in general, and the AspectJ programming language in particular, for the enforcement of design models.

Abstract We introduce the notion of roundtrip-spanners of weighted directed graphs and describe efficient algorithms for their construction. For every integer k ≥ 1 and any ε > 0, we show that any directed graph on n vertices with edge weights in the range [1, W] has a (2k + ε)-roundtrip-spanner with O(k^2 n^{1+1/k} log(nW)) edges. We then extend these constructions and obtain compact roundtrip routing schemes.
For every integer k ≥ 1 and every ε > 0, we describe a roundtrip routing scheme that has stretch 4k + ε and uses at each vertex a routing table of size Õ(k^2 n^{1/k} log(nW)). We also show that any weighted directed graph with arbitrary positive edge weights has a 3-roundtrip-spanner with O(n^{3/2}) edges. This result is optimal. Finally, we present a stretch-3 roundtrip routing scheme that uses local routing tables of size Õ(n^{1/2}). This routing scheme is essentially optimal. The roundtrip-spanner constructions and the roundtrip routing schemes for directed graphs that we describe are only slightly worse than the best available spanners and routing schemes for undirected graphs. Our roundtrip routing schemes substantially improve previous results of Cowen and Wagner. Our results are obtained by combining ideas of Cohen, Cowen and Wagner, Thorup and Zwick, with some new ideas. Presented at the 13th ACM-SIAM Symposium on Discrete Algorithms.

Abstract The following probabilistic process models the generation of noisy clustering data: clusters correspond to disjoint sets of vertices in a graph. Every two vertices from the same set are connected by an edge with probability p, and every two vertices from different sets are connected by an edge with probability r < p. The goal of the clustering problem is to reconstruct the clusters from the graph. We give algorithms that solve this problem with high probability. Compared to previous studies, our algorithms have lower time complexity and a wider parameter range of applicability. In particular, our algorithms can handle O(√n / log n) clusters in an n-vertex graph, while all previous algorithms require that the number of clusters be constant.

Abstract A major drawback in optimization problems, and in particular in scheduling problems, is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓ_p norms.
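To see why one solution cannot be optimal for every measure, here is a tiny worked example (our own illustrative numbers, not from the paper): the same four jobs admit one assignment to three machines that is better in the ℓ_∞ (makespan) norm and another that is better in the ℓ_2 norm.

```python
def lp_norm(loads, p):
    """l_p norm of a machine-load vector."""
    return sum(x ** p for x in loads) ** (1.0 / p)

# Jobs {4, 2, 1, 1} assigned to 3 machines in two different ways:
a = [4, 4, 0]  # machine loads: {4}, {2, 1, 1}, {}
b = [5, 2, 1]  # machine loads: {4, 1}, {2}, {1}

print(max(a), max(b))                # l_inf: a wins (4 < 5)
print(lp_norm(a, 2), lp_norm(b, 2))  # l_2:   b wins (sqrt(30) < sqrt(32))
```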
We address this problem by introducing the concept of an all-norm ρ-approximation algorithm, which supplies one solution that guarantees a ρ-approximation to all ℓ_p norms simultaneously. Specifically, we consider the problem of scheduling in the restricted assignment model, where there are m machines and n jobs, each of which is associated with a subset of the machines and should be assigned to one of them. Previous work considered approximation algorithms for each norm separately. Lenstra et al. (LST90) showed a 2-approximation algorithm for the problem with respect to the ℓ_∞ norm. For any fixed ℓ_p norm the previously known approximation algorithm has a performance of Θ(p). We provide an all-norm 2-approximation polynomial algorithm for the restricted assignment problem. On the other hand, we show that for any given ℓ_p norm (p > 1) there is no PTAS unless P=NP, by showing an APX-hardness result. We also show, for any given ℓ_p norm, an FPTAS for any fixed number of machines.

Abstract Detection of feature points in images is an important preprocessing stage for many algorithms in Computer Vision. We address the problem of detection of feature points in video sequences of 3D scenes, which could mainly be used for obtaining scene correspondence. The main feature we use is the zero crossing of the intensity gradient argument. We analytically show that this local feature corresponds to specific constraints on the local 3D geometry of the scene, thus ensuring that the detected points are based on real 3D features. We present a robust algorithm that tracks the detected points along a video sequence, and suggest some criteria for quantitative evaluation of such algorithms. These criteria serve in a comparison of the suggested operator with two other feature trackers. The suggested criteria are generic and could serve other researchers as well for performance evaluation of stable point detectors.
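The feature used in the abstract above, zero crossings of the intensity gradient argument, can be sketched numerically as follows. This is a simplified reading of the idea, not the authors' detector; a real implementation must also handle angle wrap-around at ±π and pixels with a near-zero gradient, where the argument is unstable.

```python
import numpy as np

def gradient_argument(img):
    """Argument (angle) of the intensity gradient at each pixel."""
    iy, ix = np.gradient(img.astype(float))  # row and column derivatives
    return np.arctan2(iy, ix)

def argument_zero_crossings(img):
    """Mark pixels where the gradient argument changes sign between
    horizontally adjacent pixels (a minimal sketch of the feature)."""
    theta = gradient_argument(img)
    zc = np.zeros(theta.shape, dtype=bool)
    zc[:, 1:] = np.signbit(theta[:, 1:]) != np.signbit(theta[:, :-1])
    return zc
```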
Abstract Motivated by a recent application in XML search engines, we study the problem of labeling the nodes of a tree (XML file) such that, given the labels of two nodes, one can determine whether one node is an ancestor of the other. We describe several new prefix-based labeling schemes, where an ancestor query roughly amounts to testing whether one label is a prefix of the other. We compare our new schemes to a simple interval-based scheme currently used by search engines, as well as to schemes with the best theoretical guarantee on the maximum label length. We performed our experimental evaluation on real XML data and on some families of random trees.

Abstract We present a new incremental algorithm for constructing the union of n triangles in the plane. In our experiments, the new algorithm, which we call the Disjoint-Cover (DC) algorithm, performs significantly better than the standard randomized incremental construction (RIC) of the union. Our algorithm is rather hard to analyze rigorously, but we provide an initial such analysis, which yields an upper bound on its performance that is expressed in terms of the expected cost of the RIC algorithm. Our approach and analysis generalize verbatim to the construction of the union of other objects in the plane and, with slight modifications, to three dimensions. We present experiments with a software implementation of our algorithm using the CGAL library of geometric algorithms.

Abstract We introduce a new general scheme for shared-memory non-preemptive scheduling policies. Our scheme utilizes a system of inequalities and thresholds, and accepts a packet if it does not violate any of the inequalities. We demonstrate that many of the existing policies can be described using our scheme, thus validating its generality. We propose a new scheduling policy, based on our general scheme, which we call the Harmonic policy.
Our simulations show that the Harmonic policy both achieves high throughput and easily adapts to changing load conditions. We also perform a theoretical analysis of the Harmonic policy and demonstrate that its throughput competitive ratio is almost optimal. Presented at INFOCOM'02.

Abstract A nearly logarithmic lower bound on the randomized competitive ratio for the metrical task systems problem is presented. This implies a similar lower bound for the extensively studied k-server problem. The proof is based on Ramsey-type theorems for metric spaces, which state that every metric space contains a large subspace which is approximately a "hierarchically well-separated tree" (HST) (and in particular an ultrametric). These Ramsey-type theorems may be of independent interest.

Abstract We study the potential impact of different kinds of liveness information on the space consumption of a program in a garbage-collected environment, specifically for Java. The idea is to measure the time difference between the actual time an object is collected by the garbage collector (GC) and the earliest potential time the object could be collected, assuming liveness information were available. We focus on the following kinds of liveness information: (i) stack reference liveness (local reference variable liveness in Java), (ii) global reference liveness (static reference variable liveness in Java), (iii) heap reference liveness (instance reference variable liveness or array reference liveness in Java), and (iv) any combination of (i)-(iii). We also provide some insights on the kind of interface between a compiler and GC that could achieve these potential savings. The Java Virtual Machine (JVM) was instrumented to measure (dynamic) liveness information. Experimental results are given for 10 benchmarks, including 5 of the SPEC-jvm98 benchmark suite.
We show that, in general, stack reference liveness may yield small benefits, global reference liveness combined with stack reference liveness may yield medium benefits, and heap reference liveness yields the largest potential benefit.

Abstract We obtain substantially improved approximation algorithms for the MIN k-SAT problem, for k = 2, 3. More specifically, we obtain a 1.1037-approximation algorithm for the MIN 2-SAT problem, improving a previous 1.5-approximation algorithm, and a 1.2136-approximation algorithm for the MIN 3-SAT problem, improving a previous 1.75-approximation algorithm for the problem. These results are obtained by adapting techniques that were previously used to obtain approximation algorithms for the MAX k-SAT problem. We also obtain some hardness-of-approximation results.

Abstract Web caches have become an integral component contributing to the improvement of the performance observed by Web clients. Content Distribution Networks (CDN) and Cache Satellite Distribution Systems (CSDS) have emerged as technologies for feeding the caches with the information clients are expected to request, ahead of time. In a Cache Satellite Distribution System (CSDS), the proxies participating in the CSDS periodically report to a central station about the requests they are receiving from their clients. The central station processes this information and selects a collection of Web documents (or Web pages), which it then "pushes" via a satellite broadcast to all, or some, of the participating proxies, expecting that most of them will request most of these documents in the near future. The result is that upon such a request, the documents will already reside in the local cache and will not need to be fetched. In this paper we address the issues of how to operate the CSDS, how to design it, and how to estimate its effect.
Questions of interest are: 1) What classes of Web documents should be transmitted by the central station, and how are they characterized? 2) What is the benefit of adding a particular proxy to a CSDS? We offer a model of this system that accounts for the request streams addressed to the proxies and captures the intricate interaction between the proxy caches. Unlike models that are based only on the access frequency of the various documents, this model captures both their frequency and their locality of reference. We provide an analysis of this system that is based on the stochastic properties of the traffic streams, which can be derived from HTTP logs. The model and analysis can serve as a basis for the design and efficient operation of the system.

Abstract The notion of Internet policy atoms was recently introduced by Andre Broido and kc claffy from CAIDA as groups of prefixes sharing a common BGP AS path at any Internet backbone router. In this paper we further research these 'atoms'. First, we offer a new method for computing the Internet policy atoms, and use the RIPE RIS database to derive their structure. Second, we show that atoms remain stable, with only about 2-3% of prefixes changing their atom membership in eight-hour periods. We support the 'atomic' nature of the policy atoms by showing that BGP update and withdraw notifications carry updates for complete atoms in over 70% of updates, while the complete set of prefixes in an AS is carried in only 21% of updates. We track the locations where atoms are created (the first differing AS in the AS path, going back from the common origin AS), showing that 86% are split between the origin AS and its peers, thus supporting the assumption that they are created by policies. Finally, applying atoms to "real life" applications, we achieve modest savings in BGP updates due to the low average prefix count in the atoms.
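The defining property of a policy atom, as stated above, is that all prefixes in an atom share the same AS path at every vantage point. A minimal sketch of that grouping idea, assuming a toy data model (`routes` maps a hypothetical vantage-point name to a prefix-to-AS-path table) rather than the paper's actual RIPE RIS processing:

```python
from collections import defaultdict

def compute_atoms(routes):
    """Group prefixes into 'policy atoms': sets of prefixes that have
    identical AS paths at every vantage point where they are observed.
    `routes`: {vantage_point: {prefix: as_path_tuple}} (toy model)."""
    # Collect, for each prefix, its AS path as seen from each vantage point.
    signature = defaultdict(dict)
    for vp, table in routes.items():
        for prefix, as_path in table.items():
            signature[prefix][vp] = as_path
    # Prefixes with identical signatures belong to the same atom.
    atoms = defaultdict(set)
    for prefix, sig in signature.items():
        atoms[tuple(sorted(sig.items()))].add(prefix)
    return list(atoms.values())
```

With two vantage points that see prefixes 10.0.0.0/8 and 10.1.0.0/16 on identical paths, those two prefixes fall into one atom and any prefix with a differing path forms its own.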
Abstract This paper addresses the problem of establishing temporal properties of programs written in languages, such as Java, that make extensive use of the heap to allocate (and deallocate) new objects and threads. Establishing liveness properties is a particularly hard challenge. One of the crucial obstacles is that heap locations have no static names and the number of heap locations is unbounded. The paper presents a framework for the verification of Java-like programs. Unlike classical model checking, which uses propositional temporal logic, we use first-order temporal logic to specify temporal properties of heap evolutions; this logic allows domain changes to be expressed, which permits allocation and deallocation to be modelled naturally.

Abstract Many computer graphics operations, such as texture mapping, 3D painting, remeshing, mesh compression, and digital geometry processing, require finding a low-distortion parameterization for irregular-connectivity triangulations of arbitrary-genus 2-manifolds. This paper presents a simple and fast method for computing parameterizations with strictly bounded distortion. The new method operates by flattening the mesh onto a region of the 2D plane. To comply with the distortion bound, the mesh is automatically cut and partitioned on the fly. The method guarantees the avoidance of global and local self-intersections, while attempting to minimize the total length of the introduced seams. To our knowledge, this is the first method to compute the mesh partitioning and the parameterization simultaneously and entirely automatically, while providing guaranteed distortion bounds. Our results on a variety of objects demonstrate that the method is fast enough to work with large, complex, irregular meshes in interactive applications.

Abstract An IR image of the human face presents its unique heat signature and can be used for recognition.
The characteristics of IR images offer advantages over visible-light images, and can be used to improve human face recognition algorithms in several respects. IR images are obviously invariant under extreme lighting conditions (including complete darkness). The main findings of this research are that IR face images are less affected by changes of pose or facial expression, and that they enable a simple method for detection of facial features. In this paper we explore several aspects of face recognition in IR images. First, we compare the effect of varying environmental conditions on IR and visible-light images through a case study. Finally, we propose a method for automatic face recognition in IR images, in which we use a preprocessing algorithm for detecting facial elements, and show the applicability of face recognition methods commonly used in the visible-light domain.

Abstract Let H be a fixed directed graph on h vertices, let G be a directed graph on n vertices, and suppose that at least εn^2 edges have to be deleted from G to make it H-free. We show that in this case G contains at least f(ε, H)·n^h copies of H. This is proved by establishing a directed version of Szemerédi's regularity lemma, and it implies that for every H there is a one-sided-error property tester, whose query complexity is bounded by a function of ε only, for testing the property P_H of being H-free.
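The one-sided-error tester implied by the last result has a simple generic shape: sample small vertex subsets, and reject only upon finding an actual copy of H, so the tester never errs on H-free graphs. Below is a toy sketch for H = directed triangle; the sample counts and subset size are arbitrary placeholders (the removal lemma is what justifies a query complexity depending on ε alone), so this illustrates only the one-sidedness, not the paper's bounds.

```python
import itertools
import random

def contains_directed_triangle(edges, verts):
    """Check whether the sampled vertex set spans a directed 3-cycle."""
    for a, b, c in itertools.permutations(verts, 3):
        if (a, b) in edges and (b, c) in edges and (c, a) in edges:
            return True
    return False

def h_free_tester(n, edges, samples=50, subset_size=6, rng=None):
    """One-sided tester sketch for H = directed triangle.
    `edges` is a set of directed pairs (u, v) on vertices 0..n-1.
    Rejects only on a witnessed copy of H, so it never rejects
    an H-free graph; parameters are illustrative placeholders."""
    rng = rng or random.Random(0)
    for _ in range(samples):
        verts = rng.sample(range(n), min(subset_size, n))
        if contains_directed_triangle(edges, verts):
            return False  # found a copy of H: certainly not H-free
    return True  # one-sided: always accepts H-free graphs
```

On a 3-vertex directed cycle the tester rejects, while on an acyclic orientation of the same vertices it accepts.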

doi:10.1007/3-540-48747-6_2
fatcat:nbkzf3rldrhk5pim4nzehnplom