I'm a computer science professor at the IUT of Université Clermont Auvergne, doing my research at LIMOS. I'm currently spending a research year at INRIA Sophia Antipolis, as part of the DataShape team. In 2014, I was a temporary professor (ATER) at Université de Montpellier, France, doing my research at LIRMM. Earlier, I was a professor at Unirio, Brazil, from 2009 to 2012. In 2008, I was a postdoctoral researcher at COPPE - UFRJ, advised by Celina Figueiredo. I received my PhD from the University of Maryland, College Park in 2007, advised by David Mount.
I'm especially interested in computational geometry, but I like working on all topics related to algorithms (data structures, approximation, graphs, randomization...). Most of my current research concerns geometric approximation, where either the distances or the size of the solution to some geometric problem is approximated. Examples include approximate nearest neighbor searching and approximating the maximum independent set of a unit disk graph. All my papers and their PDF files are available below.
My teaching experience includes analysis of algorithms, formal languages, data structures, computational geometry, probability, distributed algorithms, and programming languages. Check my CV in English or in French for more details.
Click anywhere on the paper listing to download the article's PDF. Other visualization options are available via the buttons on the right side. You can also find my papers at Google Scholar, DBLP, and ResearchGate.
@inproceedings{kernel_socg, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {Symposium on Computational Geometry (SoCG 2017)}, title = {Near-Optimal $\varepsilon$-Kernel Construction and Related Problems}, pages = {10:1--10:15}, doi = {10.4230/LIPIcs.SoCG.2017.10}, year = {2017}, }
The computation of (i) ε-kernels, (ii) approximate diameter, and (iii) approximate bichromatic closest pair are fundamental problems in geometric approximation. In each case the input is a set of points in d dimensions for constant d and an approximation parameter ε > 0. In this paper, we describe new algorithms for these problems, achieving significant improvements to the exponent of the ε-dependency in their running times, from roughly d to d / 2 for the first two problems and from roughly d / 3 to d / 4 for problem (iii). These results are all based on an efficient decomposition of a convex body using a hierarchy of Macbeath regions, and contrast to previous solutions that decomposed the space using quadtrees and grids. By further application of these techniques, we also show that it is possible to obtain near-optimal preprocessing time for the most efficient data structures for (iv) approximate nearest neighbor searching, (v) directional width queries, and (vi) polytope membership queries.
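For intuition about problem (i), here is a minimal Python sketch of the folklore 2D ε-kernel construction: keep an extreme point in each of O(1/√ε) evenly spaced directions. This is only an illustration of the notion of an ε-kernel, not the Macbeath-region algorithm of the paper; the function name and parameters are mine.

```python
import math

def naive_eps_kernel(points, eps):
    """Folklore 2D eps-kernel sketch: keep an extreme point in each of
    O(1/sqrt(eps)) evenly spaced directions, so that directional widths
    are approximately preserved. Illustrative only; the paper's algorithm
    is based on a hierarchy of Macbeath regions instead."""
    k = max(4, math.ceil(2 * math.pi / math.sqrt(eps)))
    kernel = set()
    for i in range(k):
        theta = 2 * math.pi * i / k
        d = (math.cos(theta), math.sin(theta))
        # keep the point maximizing the dot product with direction d
        kernel.add(max(points, key=lambda p: p[0] * d[0] + p[1] * d[1]))
    return list(kernel)
```

The kernel is a subset of the input whose extreme points match those of the input in every sampled direction, which is what makes it a coreset for diameter and width computations.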
@inproceedings{membership, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM-SIAM Symposium on Discrete Algorithms (SODA 2017)}, title = {Optimal Approximate Polytope Membership}, pages = {270--288}, doi = {10.1137/1.9781611974782.18}, year = {2017}, }
In the polytope membership problem, a convex polytope K in d-dimensional space is given, and the objective is to preprocess K into a data structure so that, given any query point q, it is possible to determine efficiently whether q is inside K. We consider this problem in an approximate setting, and assume that d is a constant. Given an approximation parameter ε, the query can be answered either way if the distance from q to K's boundary is at most ε times K's diameter. Previous solutions to the problem were in the form of a space-time trade-off, where logarithmic query time demands O(1/ε^{d-1}) storage, whereas storage O(1/ε^{(d-1)/2}) admits roughly O(1/ε^{(d-1)/8}) query time. In this paper, we present a data structure that achieves logarithmic query time with storage of only O(1/ε^{(d-1)/2}), which matches the worst-case lower bound on the complexity of any ε-approximating polytope. Our data structure is based on a completely new technique, a hierarchy of ellipsoids defined as approximations to Macbeath regions. As an application, we obtain major improvements to approximate nearest neighbor searching. Notably, the storage needed to answer ε-approximate nearest neighbor queries for a set of n points in logarithmic time is reduced to O(n/ε^{d/2}). This halves the exponent in the ε-dependency of the existing space bound of roughly O(n/ε^{d}), which has stood for 15 years (Har-Peled, 2001).
In the polytope membership problem, a convex polytope K in d-dimensional space is given, and the objective is to preprocess K into a data structure so that, given any query point q, it is possible to determine efficiently whether q is inside K. We consider this problem in an approximate setting. Given an approximation parameter ε, the query can be answered either way if the distance from q to K's boundary is at most ε times K's diameter. We assume that the dimension d is fixed, and K is presented as the intersection of n halfspaces. Previous solutions to approximate polytope membership were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former is optimal in the worst-case with respect to space, and the latter is optimal with respect to query time. We present four main results. First, we show how to combine the two above techniques to obtain a simple space-time trade-off. Second, we present an algorithm that dramatically improves this trade-off. We do not know whether the bounds of our algorithm are tight, but our third result shows a lower bound to the space achieved as a function of the query time. Our fourth result shows that it is possible to reduce approximate Euclidean nearest neighbor searching to approximate polytope membership queries. Combined with the above results, this provides significant improvements to the best known space-time trade-offs for approximate nearest neighbor searching.
We consider the maximum (weight) independent set problem in unit disk graphs. The high complexity of the existing PTASs motivated the development of faster constant-approximation algorithms. In this article, we present a 2.16-approximation algorithm that runs in O(n log^{2} n) time and a 2-approximation algorithm that runs in O(n^{2} log n) time for the unweighted version of the problem. In the weighted versions, the running times increase by an O(log n) factor. Our algorithms are based on a classic strip decomposition, but we improve over previous algorithms by efficiently using geometric data structures. We also propose a polynomial-time approximation scheme for the unweighted version.
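For comparison, the classic leftmost-first greedy already gives a constant-factor approximation for this problem: sweep the disk centers by x-coordinate and keep a center whenever it is independent of everything kept so far. A minimal Python sketch (illustrative only; the paper's algorithms instead use a strip decomposition with geometric data structures to get better ratios and running times):

```python
def greedy_mis_udg(centers):
    """Leftmost-first greedy independent set in a unit disk graph
    (vertices adjacent iff their centers are within distance 1).
    A classic constant-factor heuristic, shown only for intuition."""
    chosen = []
    for p in sorted(centers):  # sweep by x-coordinate
        # keep p if it is non-adjacent to every center already chosen
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 > 1.0 for q in chosen):
            chosen.append(p)
    return chosen
```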
Numerous approximation algorithms for problems on unit disk graphs have been proposed in the literature, exhibiting a sharp trade-off between running times and approximation ratios. We introduce a variation of the known shifting strategy that allows us to obtain linear-time constant-factor approximation algorithms for such problems. To illustrate the applicability of the proposed variation, we obtain results for three well-known optimization problems. Among such results, the proposed method yields linear-time (4+ε)-approximations for the maximum-weight independent set and the minimum dominating set of unit disk graphs, thus bringing significant performance improvements when compared to previous algorithms that achieve the same approximation ratios. Finally, we use axis-aligned rectangles to illustrate that the same method may be used to derive linear-time approximations for problems on other geometric intersection graph classes.
@inproceedings{polytopecomp_socg, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {Symposium on Computational Geometry (SoCG 2016)}, title = {On the Combinatorial Complexity of Approximating Polytopes}, pages = {11:1--11:15}, doi = {10.4230/LIPIcs.SoCG.2016.11}, year = {2016}, }
Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body K of diameter diam(K) is given in Euclidean d-dimensional space, where d is a constant. Given an error parameter ε > 0, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from K is at most ε diam(K). By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that O(1/ε^{(d-1)/2}) facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is Õ(1/ε^{(d-1)/2}), where Õ conceals a polylogarithmic factor in 1/ε. This is a significant improvement upon the best known bound, which is roughly O(1/ε^{d-2}). Our result is based on a novel combination of both new and old ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Bárány and Larman's economical cap covering, which may be of independent interest. Finally, we use a deterministic variation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction.
@article{perfection, author = {Vital Brazil, Emilio and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and Sasaki, Diana}, title = {The Cost of Perfection for Matchings in Graphs}, pages = {112--122}, doi = {10.1016/j.dam.2014.12.006}, journal = {Discrete Applied Mathematics}, volume = {210}, year = {2016}, }
Perfect matchings and maximum weight matchings are two fundamental combinatorial structures. We consider the ratio between the maximum weight of a perfect matching and the maximum weight of a general matching. Motivated by the computer graphics application in triangle meshes, where we seek to convert a triangulation into a quadrangulation by merging pairs of adjacent triangles, we focus mainly on bridgeless cubic graphs. First, we characterize graphs that attain the extreme ratios. Second, we present a lower bound for all bridgeless cubic graphs. Third, we present upper bounds for subclasses of bridgeless cubic graphs, most of which are shown to be tight. Additionally, we present tight bounds for the class of regular bipartite graphs.
@article{eta_grids, author = {da Fonseca, Guilherme D. and Ries, Bernard and Sasaki, Diana}, title = {On the Ratio between Maximum Weight Perfect Matchings and Maximum Weight Matchings in Grids}, pages = {45--55}, doi = {10.1016/j.dam.2016.02.017}, journal = {Discrete Applied Mathematics}, volume = {207}, year = {2016}, }
Given a graph G that admits a perfect matching, we investigate the parameter η(G) (originally motivated by computer graphics applications) which is defined as follows. Among all nonnegative edge weight assignments, η(G) is the minimum ratio between (i) the maximum weight of a perfect matching and (ii) the maximum weight of a general matching. In this paper, we determine the exact value of η for all rectangular grids, all bipartite cylindrical grids, and all bipartite toroidal grids. We introduce several new techniques to this endeavor.
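To make the definition concrete, the inner ratio can be evaluated by brute force on tiny graphs. The Python sketch below computes, for one fixed weight assignment, the ratio between the maximum weight of a perfect matching and the maximum weight of an arbitrary matching; η(G) is the minimum of this quantity over all nonnegative assignments. Function and variable names are mine.

```python
from itertools import combinations

def matching_ratio(edges, weights, n):
    """For one fixed nonnegative weight assignment, return
    (max weight of a perfect matching) / (max weight of any matching).
    eta(G) minimizes this ratio over all assignments; this brute-force
    sketch is only suitable for tiny graphs."""
    best_perfect, best_any = 0, 0
    for r in range(len(edges) + 1):
        for combo in combinations(range(len(edges)), r):
            used = set()
            for i in combo:
                used.update(edges[i])
            if len(used) < 2 * r:      # two chosen edges share a vertex
                continue
            w = sum(weights[i] for i in combo)
            best_any = max(best_any, w)
            if len(used) == n:         # all vertices matched: perfect
                best_perfect = max(best_perfect, w)
    return best_perfect / best_any
```

On the path 0-1-2-3 with a heavy middle edge, the unique perfect matching must avoid the heavy edge, so the ratio drops well below 1.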
@inproceedings{udg_linear_conf, author = {da Fonseca, Guilherme D. and de S\'{a}, Vin\'{i}cius G. P. and de Figueiredo, Celina M. H.}, booktitle = {Workshop on Approximation and Online Algorithms (WAOA 2014)}, title = {Linear-Time Approximation Algorithms for Unit Disk Graphs}, pages = {132--143}, doi = {10.1007/978-3-319-18263-6_12}, series = {Lecture Notes in Computer Science}, volume = {8952}, year = {2015}, }
Numerous approximation algorithms for unit disk graphs have been proposed in the literature, exhibiting sharp trade-offs between running times and approximation ratios. We propose a method to obtain linear-time approximation algorithms for unit disk graph problems. Our method yields linear-time (4+ε)-approximations to the maximum-weight independent set and the minimum dominating set, bringing dramatic performance improvements when compared to previous algorithms that achieve the same approximation factors. Furthermore, we present an alternative linear-time approximation scheme for the minimum vertex cover, which could be obtained by an indirect application of our method.
@article{udg_recog, author = {da Fonseca, Guilherme D. and de S\'{a}, Vin\'{i}cius G. P. and Machado, Raphael and de Figueiredo, Celina M. H.}, title = {On the Recognition of Unit Disk Graphs and the Distance Geometry Problem with Ranges}, pages = {3--19}, doi = {10.1016/j.dam.2014.08.014}, journal = {Discrete Applied Mathematics}, volume = {197}, year = {2015}, }
We introduce a method to decide whether a graph G admits a realization in the plane in which two vertices lie within unit distance of one another if and only if they are neighbors in G. Such graphs are called unit disk graphs, and their recognition is a known NP-hard problem. By iteratively discretizing the plane, we build a sequence of geometrically defined trigraphs (graphs with mandatory, forbidden, and optional adjacencies) until either we find a realization of G or the interruption of the sequence certifies that no realization exists. Additionally, we consider the proposed method in the scope of the more general Distance Geometry Problem with Ranges, where arbitrary intervals of pairwise distances are allowed.
@article{domudg, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and de S\'{a}, Vin\'{i}cius G. P. and Machado, Raphael}, title = {Efficient Sub-5 Approximations for Minimum Dominating Sets in Unit Disk Graphs}, pages = {70--81}, doi = {10.1016/j.tcs.2014.01.023}, journal = {Theoretical Computer Science}, volume = {540--541}, number = {26}, year = {2014}, }
A unit disk graph is the intersection graph of n congruent disks in the plane. Dominating sets in unit disk graphs are widely studied due to their application in wireless ad-hoc networks. Because the minimum dominating set problem for unit disk graphs is NP-hard, numerous approximation algorithms have been proposed in the literature, including some PTASs. However, since the proposal of a linear-time 5-approximation algorithm in 1995, the lack of efficient algorithms attaining better approximation factors has attracted attention. We introduce a linear-time O(n+m) approximation algorithm that takes the usual adjacency representation of the graph as input and outputs a 44/9-approximation. This approximation factor is also attained by a second algorithm, which takes the geometric representation of the graph as input and runs in O(n log n) time regardless of the number of edges. Additionally, we propose a 43/9-approximation which can be obtained in O(n^{2} m) time given only the graph's adjacency representation. It is noteworthy that the dominating sets obtained by our algorithms are also independent sets.
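For context, the classic linear-time 5-approximation mentioned above is simply any maximal independent set: in a unit disk graph, a maximal independent set is also a dominating set of size at most 5 times the optimum. A Python sketch over adjacency lists (illustrative only; the paper's 44/9-approximation adds local improvements on top of this baseline):

```python
def greedy_maximal_independent_set(adj):
    """Scan the vertices once, picking a vertex whenever it is not yet
    dominated. The result is a maximal independent set, hence (in a unit
    disk graph) an independent dominating set -- the classic linear-time
    5-approximation, not the paper's 44/9 algorithm."""
    dominated = set()
    chosen = []
    for v in adj:
        if v not in dominated:
            chosen.append(v)
            dominated.add(v)
            dominated.update(adj[v])  # v dominates all its neighbors
    return chosen
```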
@inproceedings{domudg_conf, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and de S\'{a}, Vin\'{i}cius G. P. and Machado, Raphael}, booktitle = {Workshop on Approximation and Online Algorithms (WAOA 2012)}, title = {Linear Time Approximation for Dominating Sets and Independent Dominating Sets in Unit Disk Graphs}, pages = {82--92}, doi = {10.1007/978-3-642-38016-7_8}, series = {Lecture Notes in Computer Science}, volume = {7846}, year = {2013}, }
A unit disk graph is the intersection graph of n congruent disks in the plane. Dominating sets in unit disk graphs are widely studied due to their application in wireless ad-hoc networks. Since the minimum dominating set problem for unit disk graphs is NP-hard, several approximation algorithms with different merits have been proposed in the literature. On one extreme, there is a linear-time 5-approximation algorithm. On another extreme, there are two PTASs whose running times are polynomials of very high degree. We introduce a linear-time approximation algorithm that takes the usual adjacency representation of the graph as input and attains a 44/9 approximation factor. This approximation factor is also attained by a second algorithm we present, which takes the geometric representation of the graph as input and runs in O(n log n) time, regardless of the number of edges. The analysis of the approximation factor of the algorithms, both of which are based on local improvements, exploits an assortment of results from discrete geometry to prove that certain graphs cannot be unit disk graphs. It is noteworthy that the dominating sets obtained by our algorithms are also independent sets.
@inproceedings{polytopeapx_socg, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM Symposium on Computational Geometry (SoCG 2012)}, title = {Optimal Area-Sensitive Bounds for Polytope Approximation}, pages = {363--372}, doi = {10.1145/2261250.2261305}, year = {2012}, }
Approximating convex bodies is a fundamental question in geometry and has applications to a wide variety of optimization problems. Given a convex body K in R^{d} for fixed d, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. The best known uniform bound, due to Dudley (1974), shows that O((diam(K)/ε)^{(d-1)/2}) facets suffice. While this bound is optimal in the case of a Euclidean ball, it is far from optimal for skinny convex bodies. We show that, under the assumption that the width of the body in any direction is at least ε, it is possible to approximate a convex body using O((area(K)/ε^{d-1})^{1/2}) facets. This bound is never worse than the previous bound and may be significantly better for skinny bodies. This bound is provably optimal in the worst case and improves upon our earlier result (which appeared in SODA 2012). Our improved bound arises from a novel approach to sampling points on the boundary of a convex body in order to stab all (dual) caps of a given width. This approach involves the application of an elegant concept from the theory of convex bodies, called Macbeath regions. While Macbeath regions are defined in terms of volume considerations, we show that by applying them to both the original body and its dual, and then combining this with known bounds on the Mahler volume, it is possible to achieve the desired width-based sampling.
@inproceedings{mahler_soda, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM-SIAM Symposium on Discrete Algorithms (SODA 2012)}, title = {Polytope Approximation and the Mahler Volume}, pages = {29--42}, doi = {10.1137/1.9781611973099.3}, year = {2012}, }
The problem of approximating convex bodies by polytopes is an important and well studied problem. Given a convex body K in R^{d}, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. Dudley (1974) and Bronshteyn and Ivanov (1976) show that in spaces of fixed dimension, O((diam(K)/ε)^{(d-1)/2}) vertices (alt., facets) suffice. In our first result, under the assumption that the width of the body is at least ε, we strengthen the above bound to Õ(√area(K)/ε^{(d-1)/2}). This is never worse than the previous bound (by more than logarithmic factors) and may be significantly better for skinny bodies. Our analysis exploits an interesting analogy with a classical concept from the theory of convexity, called the Mahler volume. This is a dimensionless quantity that involves the product of the volumes of a convex body and its polar dual. In our second result, we apply the same machinery to improve upon the best known bounds for answering ε-approximate polytope membership queries. Given a convex polytope P defined as the intersection of halfspaces, such a query determines whether a query point q lies inside or outside P, but may return either answer if q's distance from P's boundary is at most ε. We show that, without increasing storage, it is possible to dramatically reduce the best known search times for ε-approximate polytope membership. This further implies improvements to the best known search times for approximate nearest neighbor searching in spaces of fixed dimension.
@inproceedings{polytope_conf, doi = {10.1145/1993636.1993713}, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM Symposium on Theory of Computing (STOC 2011)}, title = {Approximate Polytope Membership Queries}, pages = {579--586}, year = {2011}, }
We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^{d}, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an allowed error ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, the latter yields constant query time, and a space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^{(d-1)/2}) to O(1/ε^{(d-1)/4}). Our approach is based on a very simple construction algorithm, whose analysis is surprisingly nontrivial. Both lower bounds and upper bounds on the performance of the algorithm are presented. To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. Remarkably, we show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.
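As a toy illustration of the direction-sampling idea underlying the classic approaches, the 2D Python sketch below samples O(1/√ε) directions and records the polytope's support value in each; a query then tests only these few halfspaces. This is a simplified stand-in (the polytope is given here by its vertices for convenience, while the paper assumes a halfspace representation), not the paper's construction; all names are mine.

```python
import math

def build_membership(vertices, eps):
    """Toy 2D direction-sampling table: for O(1/sqrt(eps)) evenly spaced
    directions, store the polytope's support value (max dot product over
    its vertices). Illustrative only, not the paper's data structure."""
    k = max(4, math.ceil(2 * math.pi / math.sqrt(eps)))
    table = []
    for i in range(k):
        t = 2 * math.pi * i / k
        d = (math.cos(t), math.sin(t))
        support = max(v[0] * d[0] + v[1] * d[1] for v in vertices)
        table.append((d, support))
    return table

def approx_member(table, q):
    """q passes iff it satisfies every sampled supporting halfspace;
    points eps-close to the boundary may be answered either way."""
    return all(q[0] * d[0] + q[1] * d[1] <= s for d, s in table)
```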
@article{flats, author = {da Fonseca, Guilherme D.}, title = {Fitting Flats to Points with Outliers}, doi = {10.1142/S0218195911003809}, journal = {International Journal of Computational Geometry and Applications}, number = {5}, volume = {21}, pages = {559--569}, year = {2011}, }
Determining the best shape to fit a set of points is a fundamental problem in many areas of computer science. We present an algorithm to approximate the k-flat that best fits a set of n points with n - m outliers. This problem generalizes the smallest m-enclosing ball, infinite cylinder, and slab. Our algorithm gives an arbitrary constant factor approximation in O(n^{k+2}/m) time, regardless of the dimension of the point set. While our upper bound nearly matches the lower bound, the algorithm may not be feasible for large values of k. Fortunately, for some practical sets of inliers, we reduce the running time to O(n^{k+2}/m^{k+1}), which is linear when m = Ω(n).
@article{pgrid, author = {de S\'{a}, Vin\'{i}cius G. P. and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and Machado, Raphael}, doi = {10.1016/j.tcs.2011.01.018}, journal = {Theoretical Computer Science}, title = {Complexity Dichotomy on Partial Grid Recognition}, number = {22}, volume = {412}, pages = {2370--2379}, year = {2011}, }
Deciding whether a graph can be embedded in a grid using only unit-length edges is NP-complete, even when restricted to binary trees. However, it is not difficult to devise a number of graph classes for which the problem is polynomial, even trivial. A natural step, outstanding thus far, was to provide a broad classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of allowed vertex degrees in the input graphs, yielding a full dichotomy on the complexity of the problem. As byproducts, the previous NP-completeness result for binary trees was strengthened to strictly binary trees, and the three-dimensional version of the problem was for the first time proven to be NP-complete. Our results were made possible by introducing the concepts of consistent orientations and robust gadgets, and by showing how the former allows NP-completeness proofs by local replacement even in the absence of the latter.
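For very small instances, the decision problem can of course be settled by exhaustive search. The hedged Python sketch below tries to place the vertices on distinct integer points so that adjacent vertices are at distance exactly 1 (a subgraph-of-the-grid embedding); it is exponential and only meant to make the problem statement concrete.

```python
from itertools import product

def unit_grid_embeddable(n, edges, half=2):
    """Backtracking search for an embedding of a graph on vertices 0..n-1
    into the integer grid with unit-length edges. Exponential time -- a
    toy for tiny graphs only, since the problem itself is NP-complete."""
    cells = list(product(range(-half, half + 1), repeat=2))
    pos = {}

    def fits(v, p):
        if p in pos.values():
            return False            # vertices occupy distinct points
        for u, w in edges:
            other = w if u == v else u if w == v else None
            if other in pos:
                q = pos[other]      # adjacent vertices: distance exactly 1
                if abs(p[0] - q[0]) + abs(p[1] - q[1]) != 1:
                    return False
        return True

    def place(v):
        if v == n:
            return True
        for p in cells:
            if fits(v, p):
                pos[v] = p
                if place(v + 1):
                    return True
                del pos[v]
        return False

    return place(0)
```

A path embeds, while a triangle cannot: no grid point is at unit distance from both endpoints of a unit-length edge.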
@inproceedings{proximity, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {European Symposium on Algorithms (ESA 2010)}, series = {Lecture Notes in Computer Science}, volume = {6346}, pages = {374--385}, title = {A Unified Approach to Approximate Proximity Searching}, doi = {10.1007/978-3-642-15775-2_32}, year = {2010}, }
The inability to answer proximity queries efficiently for spaces of dimension d > 2 has led to the study of approximation to proximity problems. Several techniques have been proposed to address different approximate proximity problems. In this paper, we present a new and unified approach to proximity searching, which provides efficient solutions for several problems: spherical range queries, idempotent spherical range queries, spherical emptiness queries, and nearest neighbor queries. In contrast to previous data structures, our approach is simple and easy to analyze, providing a clear picture of how to exploit the particular characteristics of each of these problems. As applications of our approach, we provide simple and practical data structures that match the best previous results up to logarithmic factors, as well as advanced data structures that improve over the best previous results for all aforementioned proximity problems.
@inproceedings{vlsi_isco, author = {de S\'{a}, Vin\'{i}cius G. P. and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and Machado, Raphael}, booktitle = {International Symposium on Combinatorial Optimization}, series = {Electronic Notes in Discrete Mathematics}, title = {Complexity Dichotomy on Degree-Constrained VLSI Layouts with Unit-Length Edges}, volume = {36}, pages = {391--398}, doi = {10.1016/j.endm.2010.05.050}, year = {2010}, }
Deciding whether an arbitrary graph admits a VLSI layout with unit-length edges is NP-complete, even when restricted to binary trees. However, for certain graphs, the problem is polynomial or even trivial. A natural step, outstanding thus far, was to provide a broader classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of vertex degrees in the input graphs, yielding a comprehensive dichotomy on the complexity of the problem, with and without the restriction to trees.
@article{absolute, author = {da Fonseca, Guilherme D. and Mount, David M.}, doi = {10.1016/j.comgeo.2008.09.009}, issn = {0925-7721}, journal = {Computational Geometry}, number = {4}, volume = {43}, pages = {434--444}, title = {Approximate range searching: The absolute model}, year = {2010}, }
Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation.
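The absolute model admits particularly simple structures. As a toy illustration (mine, not one of the paper's data structures), snapping the points to a grid of cell size ε answers halfplane counting queries while misclassifying only points within O(ε) of the range's boundary, which is exactly the absolute error model described above.

```python
from collections import defaultdict

def build_grid(points, eps):
    """Snap each point to a grid cell of side eps, keeping only per-cell
    counts. Toy absolute-model sketch: only points within O(eps) of a
    range's boundary can end up on the wrong side of a query."""
    grid = defaultdict(int)
    for x, y in points:
        grid[(round(x / eps), round(y / eps))] += 1
    return grid

def approx_halfplane_count(grid, eps, a, b, c):
    """Approximately count input points with a*x + b*y <= c by testing
    the snapped cell coordinates instead of the original points."""
    return sum(cnt for (i, j), cnt in grid.items()
               if a * i * eps + b * j * eps <= c)
```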
@article{enclosing, author = {de Figueiredo, Celina M. H. and da Fonseca, Guilherme D.}, doi = {10.1016/j.ipl.2009.09.001}, issn = {0020-0190}, journal = {Information Processing Letters}, number = {21-22}, pages = {1216--1221}, title = {Enclosing weighted points with an almost-unit ball}, volume = {109}, year = {2009}, }
Given a set of n points with positive real weights in d-dimensional space, we consider an approximation to the problem of placing a unit ball, in order to maximize the sum of the weights of the points enclosed by the ball. Given an approximation parameter ε < 1, we present an O(n/ε^{d-1}) expected time algorithm that determines a ball of radius 1 + ε enclosing a weight at least as large as the weight of the optimal unit ball. This is the first approximate algorithm for the weighted version of the problem in d-dimensional space. We also present a matching lower bound for a certain class of algorithms for the problem.
@article{odd, author = {Bueno, Let\'{\i}cia R. and Faria, Luerbio and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D.}, journal = {Applicable Analysis and Discrete Mathematics}, number = {2}, pages = {386--394}, title = {Hamiltonian Paths in Odd Graphs}, doi = {10.2298/AADM0902386B}, volume = {3}, year = {2009}, }
Lovász conjectured that every connected vertex-transitive graph has a Hamiltonian path. The odd graphs O_{k} form a well-studied family of connected, k-regular, vertex-transitive graphs. It was previously known that O_{k} has Hamiltonian paths for k ≤ 14. A direct computation of Hamiltonian paths in O_{k} is not feasible for large values of k, because O_{k} has binom(2k-1,k-1) vertices and kbinom(2k-1,k-1)/2 edges. We show that O_{k} has Hamiltonian paths for 15 ≤ k ≤ 18. We do so without running any heuristics. Instead, we use existing results on the middle levels problem, therefore relating these two fundamental problems. We show that further improved results for the middle levels problem can be used to find Hamiltonian paths in O_{k} for larger values of k.
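The vertex and edge counts above are easy to reproduce; a short Python check (O_3 is the Petersen graph):

```python
import math

def odd_graph_size(k):
    """Number of vertices and edges of the odd graph O_k: vertices are
    the (k-1)-subsets of a (2k-1)-set, and O_k is k-regular."""
    n = math.comb(2 * k - 1, k - 1)
    return n, k * n // 2
```

Already for k = 14 this gives 20,058,300 vertices, which is why a direct computation of Hamiltonian paths becomes infeasible.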
@inproceedings{tradeoffs-sibgrapi, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {21st Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI '08)}, doi = {10.1109/SIBGRAPI.2008.24}, pages = {237--244}, title = {Tradeoffs in Approximate Range Searching Made Simpler}, year = {2008}, }
Range searching is a fundamental problem in computational geometry. The problem involves preprocessing a set of n points in d-dimensional space into a data structure, so that it is possible to determine the subset of points lying within a given query range. In approximate range searching, a parameter ε > 0 is given, and for a given query range R the points lying within distance ε diam(R) of the range's boundary may be counted or not. In this paper we present three results related to the issue of tradeoffs in approximate range searching. First, we introduce the range sketching problem. Next, we present a space-time tradeoff for smooth convex ranges, which generalize spherical ranges. Finally, we show how to modify the previous data structure to obtain a space-time tradeoff for simplex ranges. In contrast to existing results, which are based on relatively complex data structures, all three of our results are based on simple, practical data structures.
@inproceedings{absolute-wads, author = {da Fonseca, Guilherme D.}, booktitle = {Algorithms and Data Structures (WADS 2007)}, doi = {10.1007/978-3-540-73951-7\_2}, pages = {2--14}, series = {Lecture Notes in Computer Science}, title = {Approximate Range Searching: The Absolute Model}, volume = {4619}, year = {2007}, }
Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation.
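The difference between the two error models can be made concrete with a ball range (an illustrative sketch with hypothetical names, not a data structure from the paper): in the absolute model the fuzzy band around the boundary has fixed width ε, while in the relative model its width scales with the range's diameter.

```python
import math

def classify_point_absolute(p, center, radius, eps):
    """Absolute-model classification for a ball range: points within
    distance eps of the boundary may legally be counted either way."""
    d = math.dist(p, center)
    if d <= radius - eps:
        return "inside"   # must be counted
    if d >= radius + eps:
        return "outside"  # must not be counted
    return "either"       # within eps of the boundary: either answer is valid

# In the relative model, the band width would instead be
# eps * diam = eps * 2 * radius, growing with the range.
```

For a unit ball and ε = 0.1, a point at distance 0.5 must be counted, one at distance 2.0 must not, and one at distance 1.05 may go either way.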
@article{hssp, author = {de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and de S\'{a}, Vinicius G. P. and Spinrad, Jeremy}, doi = {10.1007/s00453-005-1198-2}, issn = {0178-4617}, journal = {Algorithmica}, number = {2}, pages = {149--180}, title = {Algorithms for the Homogeneous Set Sandwich Problem}, volume = {46}, year = {2006}, }
A homogeneous set is a non-trivial module of a graph, i.e., a non-empty, non-unitary, proper subset of a graph's vertices such that all its elements have exactly the same outer neighborhood. Given two graphs G_{1}(V,E_{1}), G_{2}(V,E_{2}), the Homogeneous Set Sandwich Problem (HSSP) asks whether there exists a sandwich graph G_{S}(V,E_{S}), E_{1}⊆E_{S}⊆E_{2}, which has a homogeneous set. In 2001, Tang et al. published an O(n^{2}Δ_{2}) algorithm that was recently proven incorrect, which would have reset the HSSP's best known upper bound to the O(n^{4}) bound determined by Cerioli et al. in 1998. We nevertheless present new deterministic algorithms that establish an O(n^{3} log(m/n)) upper bound. We also give two even faster O(n^{3}) randomized algorithms, whose simplicity might lend them didactic usefulness. We believe that, besides providing efficient, easy-to-implement procedures to solve the problem, the study of these new approaches allows a fairly thorough understanding of it.
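Verifying that a given set is homogeneous is straightforward even though finding one in a sandwich graph is the hard part; the sketch below (a hypothetical checker written from the definition above, not one of the paper's algorithms) tests that every outside vertex sees either all of the set or none of it.

```python
def is_homogeneous_set(vertices, edges, H):
    """Check whether H is a homogeneous set (non-trivial module) of the
    graph (vertices, edges): 2 <= |H| < |V| and every vertex outside H is
    adjacent to either all of H or none of H."""
    H = set(H)
    if not (2 <= len(H) < len(vertices)):
        return False
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Each outside vertex must see 0 or |H| members of H.
    return all(len(adj[v] & H) in (0, len(H)) for v in set(vertices) - H)
```

In a triangle, any pair of vertices is homogeneous (the third vertex sees both); in a path 1-2-3-4, the set {2, 3} is not (vertex 1 sees only 2).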
@inproceedings{hssp-wea, author = {de Figueiredo, Celina M. and da Fonseca, Guilherme D. and de S\'{a}, Vin\'{\i}cius G. and Spinrad, Jeremy}, booktitle = {Experimental and Efficient Algorithms}, pages = {243--252}, title = {Faster Deterministic and Randomized Algorithms on the Homogeneous Set Sandwich Problem}, year = {2004} }
A homogeneous set is a non-trivial, proper subset of a graph's vertices such that all its elements have exactly the same outer neighborhood. Given two graphs G_{1}(V,E_{1}), G_{2}(V,E_{2}), we consider the problem of finding a sandwich graph G_{S}(V,E_{S}), with E_{1}⊆E_{S}⊆E_{2}, which contains a homogeneous set, in case such a graph exists. This is called the Homogeneous Set Sandwich Problem (HSSP). We give an O(n^{3.5}) deterministic algorithm, which improves the known upper bound for this problem, and an O(n^{3}) Monte Carlo algorithm as well. Both algorithms, which share the same underlying idea, are quite easy to implement.
@article{hanger, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and Carvalho, Paulo C. P.}, doi = {10.1016/j.ipl.2003.10.010}, issn = {00200190}, journal = {Information Processing Letters}, number = {3}, pages = {151--157}, title = {Kinetic hanger}, volume = {89}, year = {2004}, }
A kinetic priority queue is a kinetic data structure which determines the largest element in a collection of continuously changing numbers subject to insertions and deletions. Due to its importance, many different constructions have been suggested in the literature, each with its pros and cons. We propose a simple construction that takes advantage of randomization to achieve optimal locality and the same time complexity as most other efficient structures.
@article{sm_restricted, author = {Dias, V\^{a}nia M. F. and da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and Szwarcfiter, Jayme L.}, doi = {10.1016/S0304-3975(03)00319-0}, issn = {03043975}, journal = {Theoretical Computer Science}, number = {1-3}, pages = {391--405}, title = {The stable marriage problem with restricted pairs}, volume = {306}, year = {2003}, }
A stable matching is a complete matching of men and women such that no man and woman who are not partners both prefer each other to their actual partners under the matching. In an instance of the stable marriage problem, each of the n men and n women ranks the members of the opposite sex in order of preference. It is well known that at least one stable matching exists for every stable marriage problem instance. We consider extensions of the stable marriage problem obtained by forcing and by forbidding sets of pairs. We present a characterization for the existence of a solution for the stable marriage with forced and forbidden pairs problem. In addition, we describe a reduction of the stable marriage with forced and forbidden pairs problem to the stable marriage with forbidden pairs problem. Finally, we also present algorithms for finding a stable matching, all stable pairs and all stable matchings for this extension. The complexities of the proposed algorithms are the same as the best known algorithms for the unrestricted version of the problem.
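For reference, the unrestricted version is solved by the classic Gale-Shapley proposal algorithm, sketched below (a standard textbook rendition, not the paper's extension to forced and forbidden pairs; all identifiers are my own).

```python
def gale_shapley(men_prefs, women_prefs):
    """Classic Gale-Shapley: men propose in preference order; returns the
    man-optimal stable matching as a dict mapping each man to a woman.
    Each prefs dict maps a person to an ordered list of the opposite side."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)                  # men without a partner
    next_prop = {m: 0 for m in men_prefs}   # next woman each man proposes to
    fiance = {}                             # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in fiance:
            fiance[w] = m                   # w accepts her first proposal
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])          # w trades up; old partner freed
            fiance[w] = m
        else:
            free.append(m)                  # w rejects m
    return {m: w for w, m in fiance.items()}
```

The forced/forbidden extension studied in the paper constrains which pairs may or must appear in the matching; the abstract's point is that it can be solved within the same time bounds as this unrestricted case.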
@article{kinetic_heap, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H.}, doi = {10.1016/S0020-0190(02)00366-6}, issn = {00200190}, journal = {Information Processing Letters}, number = {3}, pages = {165--169}, title = {Kinetic heap-ordered trees: Tight analysis and improved algorithms}, volume = {85}, year = {2003}, }
The most natural kinetic data structure for maintaining the maximum of a collection of continuously changing numbers is the kinetic heap. Basch, Guibas, and Ramkumar proved that the maximum number of events processed by a kinetic heap with n numbers changing as linear functions of time is O(n log^{2} n) and Ω(n log n). We prove that this number is actually Θ(n log n). In the kinetic heap, a linear number of events are stored in a priority queue, consequently, it takes O(log n) time to determine the next event at each iteration. We also present a modified version of the kinetic heap that processes O(n log n / log log n) events, with the same O(log n) time complexity to determine the next event.