Occlusion Culling Using Prioritized Visibility Queries
Kelvin Chung, Suya You, and Ulrich Neumann
Integrated Media Systems Center
Computer Science Department
University of Southern California
{tatchung | suya | uneumann}@usc.edu
Abstract
Much of the recent research in occlusion culling focuses on hardware visibility queries. In this paper, we present a novel method for prioritizing visibility queries so that they work effectively with an occlusion culling approach. We demonstrate that adding query priority to simple front-to-back slab visibility processing can greatly reduce the number of unfruitful visibility queries in occlusion culling computations. We also demonstrate how to exploit frame coherence, for both passed and failed visibility queries, to enhance the effectiveness of the visibility culling technique. Combining our approach with a typical occlusion culling method, we have applied it to several scenes of varying structural complexity; the results show that the approach minimizes the number of non-essential visibility queries and hence improves the overall performance of the rendering system.
1. Introduction
Rendering large-scale complex environments at interactive rates is a challenging problem. Techniques such as level-of-detail (LOD) and view-dependent mesh simplification [22], occlusion culling [4, 23], impostors [24], and hybrid methods are commonly used to tackle it.
Occlusion culling is an effective technique that culls geometry inside the view frustum that is not visible from the current viewpoint, and it is now supported by graphics hardware. Algorithms fall into two classes: from-region visibility and from-point visibility. From-region algorithms compute a potentially visible set (PVS) for a region that remains valid as long as the viewpoint stays within that region. The time to compute a solution usually depends on scene complexity and view-cell size, which inherently limits how fast the viewer can move. A dedicated visibility server [9, 10, 11] can be deployed to compute the PVS for a region at runtime, in parallel with the rendering system, in order to achieve interactive rates. From-point algorithms, on the other hand, compute visibility for a single viewpoint at each frame; they are generally simpler and yield a tighter PVS. Well-known algorithms exist for both from-point [2, 3, 4] and from-region visibility computations [5, 6, 7, 8].
Occlusion culling techniques can also be classified by whether the PVS computation uses object-space or image-space hierarchies. Object-space approaches build hierarchical data structures for visibility queries, such as the occlusion tree [4]. They have difficulty capturing complex occluder fusion, i.e., the fact that several small occluders in front can together occlude an object in the back that none of them occludes alone. On the other hand, their solution is independent of the rendering image resolution. Image-based approaches build hierarchies of image maps to perform queries and capture occluder fusion easily; examples are the hierarchical z-buffer [2] and hierarchical occlusion maps [3]. These approaches may select a certain number of large occluders to form the hierarchical data structure for queries, so the selection of large occluders is key to effective visibility culling, even though no single large occluder may exist. The recent trend in interactive rendering favors image-based approaches that capture occluder fusion without needing to select large occluders.
In this paper, we present a novel approach for prioritizing visibility queries. We demonstrate that adding query priority to simple front-to-back slab visibility processing can greatly reduce the number of unfruitful visibility queries in occlusion culling computations. We also show how to exploit frame coherence, for both passed and failed queries, to enhance the effectiveness of the visibility culling technique. Combining our approach with a typical occlusion culling method [1], we have applied it to several complex models; the results show that it effectively minimizes the number of non-essential visibility queries.
The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 gives an overview of the prioritized-query-based occlusion culling approach. Section 4 details the prioritized visibility procedure, and Section 5 describes its implementation. Experimental results are presented in Section 6. Finally, Section 7 concludes and discusses future directions.
2. Related Work
Image-based occlusion queries are now a common feature of graphics hardware and an official part of the OpenGL 1.5 specification [14]. The basic idea is to first draw the bounding box of a complex model with writes to the depth and color buffers disabled, and then issue a hardware visibility query to test whether any fragments pass the depth test. If the bounding box is invisible, rendering of the geometry completely contained inside it can be skipped. The visibility queries in OpenGL 1.5 and the GL_NV_occlusion_query extension support asynchronous occlusion tests, so several bounding boxes can be submitted and their queries collected later in a batch. This alleviates the pipeline stalling that occurred with the earlier HP occlusion culling extension, which rendered one bounding box and queried it at a time. The newer OpenGL API also returns the number of fragments that passed the test instead of just a Boolean value; we will show how to make use of this information later in the paper. Hardware visibility queries thus provide a simple and effective way to capture occluder fusion and perform occlusion culling. Several algorithms have been proposed using hardware visibility queries [1, 12, 15, 16, 17, 18].
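As an illustration (not code from any of the cited systems), a batched query with the OpenGL 1.5 query-object API might look like the following sketch; drawBoundingBox() and drawCellGeometry() are assumed application callbacks, and an OpenGL 1.5 context (or the equivalent ARB/NV extension entry points) is assumed.

```cpp
// Minimal sketch: batched occlusion queries with the OpenGL 1.5 query-object API.
#include <GL/gl.h>
#include <vector>

void drawBoundingBox(int cell);    // assumed: draws the cell's bounding box
void drawCellGeometry(int cell);   // assumed: draws the cell's triangles

void testAndRenderCells(const std::vector<int>& cells)
{
    std::vector<GLuint> queries(cells.size());
    glGenQueries((GLsizei)queries.size(), queries.data());

    // Issue all queries first (asynchronously), drawing only the bounding boxes
    // with color and depth writes disabled.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    for (std::size_t i = 0; i < cells.size(); ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawBoundingBox(cells[i]);
        glEndQuery(GL_SAMPLES_PASSED);
    }
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Collect the results afterwards; any cell whose box produced fragments
    // is rendered in full.
    for (std::size_t i = 0; i < cells.size(); ++i) {
        GLuint fragmentsPassed = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &fragmentsPassed);
        if (fragmentsPassed > 0)
            drawCellGeometry(cells[i]);
    }
    glDeleteQueries((GLsizei)queries.size(), queries.data());
}
```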
In [15], the scene is subdivided into a sloppy n-ary space-partitioning tree, which is similar to other bounding volume representations but does not have a fixed number of child nodes. Bounding volumes of tree nodes at the same level are not necessarily disjoint, which avoids splitting triangles that extend across two bounding regions. After view frustum culling on the tree nodes, the bounding boxes and the geometry inside them are tested and rendered in depth-sorted order using visibility queries. However, the queries are not as effective as the strict front-to-back, one-slab-at-a-time processing suggested in [1]. There is also a pipeline-stalling problem for each query, since applying the asynchronous occlusion test is not as straightforward as in [1].
In [12], a PC cluster of three machines is used to achieve good performance when rendering tens of millions of triangles. One machine performs hardware culling, one renders occluders, and the third displays the geometry. The first two machines switch roles every frame, forming an "occlusion switch". Depth information from the previous frame is used as occluders for testing the bounding-box hierarchy in the current frame, so there is a one-frame delay that may cause geometry to be missing for a frame if the viewpoint moves quickly. In practice, however, this is rarely noticeable and seems a good tradeoff for performance.
In [16, 17], the Prioritized-Layered Projection algorithm initializes a cell's priority from the number of triangles it contains. The space is divided so that the density of primitives per cell is roughly uniform. In the first phase, priorities are roughly assigned and geometry is rendered; a priority is computed from the front cell's priority scaled by the dot product of the viewing direction and the normal of the plane shared between cells, and the result is added to the back cell's priority. In the second phase, visibility queries check the bounding boxes of the remaining low-priority cells. However, using the triangle count as the criterion may not accurately reflect opacity; for example, a single triangle can occupy a whole cell. Moreover, the approach only selects a certain number of high-priority cells as occluders, and the cells are not rendered in strict front-to-back order. These drawbacks reduce the effectiveness of the visibility queries.
In [1], the scene is subdivided into uniform cells and visibility queries are issued in front-to-back slab order, similar to a volume rendering traversal. This strategy identifies occlusion effectively with hardware visibility queries, and the asynchronous occlusion test alleviates pipeline stalling by querying all the bounding boxes in a slab at once. Since the number of visibility queries is usually very high, however, issuing them indiscriminately is wasteful if many cells are actually visible. In the next section, we describe our approach to this problem: a prioritized query technique.
3. Occlusion Culling Using Prioritized Visibility Queries
3.1. Front-to-back occlusion culling
Following [1], the model geometry is first sorted into uniform cells based on a spatial subdivision, and visibility is tested in front-to-back slab order from the viewpoint (Figure 1). For each slab, multiple bounding boxes are sent down the pipeline for asynchronous visibility queries. Each bounding box serves as the occlusion-query geometry, testing whether any of its fragments pass the depth test; if the query determines that a bounding box is invisible, none of the triangles inside the box are rendered. Nested grid decomposition is used for cells with high polygon counts, so the effectiveness depends strongly on the triangle distribution. A triangle occupying more than one cell is inserted into each of those cells to avoid splitting it into many small triangles, and a rendering counter per shared triangle prevents rendering it twice. The list of triangles completely inside a cell can be stored as a display list for rendering and/or as a vertex array for processing.
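The traversal can be summarized by the following schematic C++ sketch; the Cell structure and the query helpers are placeholders for illustration, not the authors' actual interfaces.

```cpp
// Schematic sketch of the front-to-back slab traversal described above.
#include <vector>

struct Cell { int id; bool inFrustum; };
using Slab = std::vector<Cell*>;

void issueBoundingBoxQuery(const Cell&);   // assumed: draw box, begin/end occlusion query
unsigned fragmentsPassed(const Cell&);     // assumed: read back the query result
void renderCell(const Cell&);              // assumed: draw triangles; shared ones counted once

void renderWithSlabCulling(std::vector<Slab>& slabsFrontToBack)
{
    for (Slab& slab : slabsFrontToBack) {
        // Issue every bounding-box query of the slab at once (asynchronously).
        for (Cell* c : slab)
            if (c->inFrustum) issueBoundingBoxQuery(*c);

        // Collect the results, then render only the cells whose boxes were visible.
        for (Cell* c : slab)
            if (c->inFrustum && fragmentsPassed(*c) > 0)
                renderCell(*c);
    }
}
```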
3.2. Prioritized visibility queries
Our visibility-prioritizing scheme is built on top of the above occlusion culling algorithm. A pre-processing step first computes each cell's opacity value for a fixed set of orientations, and the priorities are then accumulated in the slabs' processing order. The priority computation is designed carefully to avoid adding runtime overhead to the rendering system: it combines the priorities of neighboring cells efficiently and also takes the outcome of previous visibility queries and frame coherence into account. We exploit frame-to-frame coherence for both passed and failed queries to enhance the effectiveness of the prioritizing scheme, and non-uniform cell sizes adapt the system to outdoor environments. A feedback mechanism from previous hardware visibility queries improves the accuracy of query prediction in the current frame; if a cell is determined to be invisible, its priority is adjusted automatically. By combining the prioritizing scheme with the occlusion culling method, we minimize the number of non-essential visibility queries and hence improve the overall performance of the occlusion culling system.
4. Priority Computation
4.1. Pre-processing
Each cell is preprocessed to compute its opacity value for 13 view directions (3 face, 6 edge, and 4 corner directions) using orthogonal projection. The sample size is set to 128x128. Triangles are rendered in white on a black background, and the number of visible fragments is counted as the non-zero pixels returned by the OpenGL function glReadPixels(). This value is then normalized to [0, 1] by the sample size. Note that, unless special clipping planes are used for the edge and corner projections (4 planes in an edge view and 8 in a corner view), the opacity value can be overestimated because triangles are shared between cells. The opacity can also be underestimated for corner projections, since a solid cube projects to a hexagon; to avoid this, the number of non-zero pixels should be normalized by the area of the hexagon instead of the sample size.
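A minimal sketch of the read-back and normalization step is shown below; it assumes the cell has already been rendered in white on black into a 128x128 viewport with an orthogonal projection, as described above.

```cpp
// Sketch of the opacity pre-processing read-back: count non-zero pixels and
// normalize by the sample size (hexagon normalization for corner views omitted).
#include <GL/gl.h>
#include <vector>

float readOpacity(int sampleSize = 128)
{
    std::vector<unsigned char> pixels(sampleSize * sampleSize);
    glReadPixels(0, 0, sampleSize, sampleSize,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels.data());

    int covered = 0;
    for (unsigned char p : pixels)
        if (p != 0) ++covered;                               // count covered pixels

    return float(covered) / float(sampleSize * sampleSize);  // normalize to [0, 1]
}
```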
4.2. Priority initialization
Initially, the priority of the eye cell is assigned one of its pre-computed opacity values. We use dot products to find which of the 13 pre-computed directions is closest to the view direction. Although this can overestimate the opacity if geometry in the eye cell lies behind the eye, we found it acceptable to overestimate rather than underestimate. Another strategy is to read back the pixel colors from the frame buffer and compare them with the background color in the eye cell; this can be implemented quickly by obtaining the number of fragments that pass the depth query and then normalizing the result. Note that unless the triangles in the eye cell are rendered in front-to-back order, the result may be larger than 1. Obviously, the priority of cells outside the view frustum is set to zero.
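As an illustration, the direction selection could be sketched as below; the 13-entry direction and opacity tables are assumed to come from the pre-processing step, and the use of the absolute dot product for directions sharing a projection axis is our assumption.

```cpp
// Sketch: pick the pre-computed opacity whose direction is closest to the view direction.
#include <cmath>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

float initialPriority(const Vec3& viewDir, const Vec3 dirs[13], const float opacity[13])
{
    int best = 0;
    float bestDot = -1.0f;
    for (int i = 0; i < 13; ++i) {
        // Both vectors are assumed normalized; |dot| treats opposite directions
        // along the same projection axis as equivalent.
        float d = std::fabs(dot(viewDir, dirs[i]));
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return opacity[best];
}
```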
Figure 1 - Occlusion culling using front-to-back visibility queries: the first slab is tested and found visible (blue), while some cells of the second slab (gray) may be invisible.
4.3. Priority accumulation
For each cell A, its priority P is computed from its pre-processed opacity value Pc and an accumulated neighbor priority Pn. Since the priority is computed for every cell (including empty cells) in every frame (ignoring frame coherence for now), the overhead would be large if we simply used dot products to find which of the 13 pre-computed directions is closest to the view direction. As an approximation, we therefore divide space into 27 non-uniform regions centered on cell A (Figure 2). The region B in which the viewpoint falls can be found with 3 to 6 comparisons (1 to 2 comparisons per dimension). Pc is chosen as the pre-computed opacity for the direction BA, and Pn is set to the priority of B. To compute P, we use the heuristic that cell A contributes its opacity in proportion to the remaining transparent area of B, i.e., Pc × (1 – Pn). Combining this with B's priority, the accumulated priority at A is therefore
P = Pc + Pn – Pc × Pn        (1)
This formula guarantees that P is non-decreasing and stays within the range [0, 1]. The estimated priority is in turn propagated to A's neighbor cells when the next slab is rendered. In the case of orthogonal projection, the priorities of cells in the first slab (except the eye cell) may not be initialized; we can use frame coherence to initialize them with the last computed priorities.
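A compact sketch of this step might look as follows; the Bounds structure and the per-axis region encoding are illustrative assumptions, while accumulatePriority() is a direct transcription of equation (1).

```cpp
// Sketch: locate which of the 27 regions around cell A the viewpoint falls into
// (1-2 comparisons per axis), then combine priorities with equation (1).
struct Bounds { float lo[3], hi[3]; };

// Writes -1, 0 or +1 per axis, i.e. one of the 27 regions relative to cell A.
void regionOfViewpoint(const Bounds& a, const float eye[3], int region[3])
{
    for (int k = 0; k < 3; ++k) {
        if      (eye[k] < a.lo[k]) region[k] = -1;
        else if (eye[k] > a.hi[k]) region[k] = +1;
        else                       region[k] =  0;
    }
}

// Equation (1): P = Pc + Pn - Pc * Pn, stays in [0, 1] and never decreases.
float accumulatePriority(float Pc, float Pn)
{
    return Pc + Pn - Pc * Pn;
}
```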
4.4. Frame coherence
To exploit frame-to-frame coherence, the priority is refined adaptively each frame when a visibility query finds a cell invisible: the cell's priority is set to 1. This avoids re-computing its priority for the next frame, while the priority value can still propagate to its neighbors in the next frame, resulting in more visibility queries and potentially more invisible cells being found. Essentially, any cell found invisible is automatically 'locked' at priority 1 so that the visibility query is repeated until it becomes visible.
The above method handles frame coherence when a visibility query determines that a cell is invisible. When a cell is visible, on the other hand, we can also exploit frame-to-frame coherence by not issuing a visibility query for a certain number of frames (Fskip). Suppose that at time Ti the number of fragments passed returned by the visibility query is Ci (Figure 3). After j – i frames, the cell (purple in Figure 3) is still visible but now passes Cj fragments. Note that the fragment count returned by a visibility query may not equal the number of pixels actually displayed on screen unless the geometry is rendered front-to-back; however, it is still a good estimate that can be used without extra computation.
The fragment count thus changes by |Cj – Ci| / (Tj – Ti) per frame for the current viewpoint movement. Hence, the currently visible fragments Cj can be predicted to become invisible after Cj × (Tj – Ti) / |Cj – Ci| frames. When the fragment count does not change between frames, we apply a maximum frame penalty Fpenalty, so that
Fskip(j) = min( Cj × (Tj – Ti) / |Cj – Ci| , Fpenalty )        (2)
To use frame coherence, two timestamps are kept for each cell: one recording when its priority was last computed, and one recording when a visibility query last returned the cell as visible.
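As an illustration of equation (2), a small helper might look like the following sketch; the names Cj, Ci, Tj, Ti and maxPenalty mirror the symbols in the text.

```cpp
// Sketch of equation (2): from the fragment counts of the last two queries,
// estimate how many frames the cell may safely skip before being queried again.
#include <algorithm>
#include <cstdlib>

int framesToSkip(unsigned Cj, unsigned Ci, int Tj, int Ti, int maxPenalty)
{
    int  deltaFrames    = Tj - Ti;
    long deltaFragments = std::labs((long)Cj - (long)Ci);

    if (deltaFragments == 0)               // fragment count did not change:
        return maxPenalty;                 // fall back to the maximum penalty Fpenalty

    long predicted = (long)Cj * deltaFrames / deltaFragments;
    return (int)std::min<long>(predicted, maxPenalty);
}
```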
4.5. Visibility query criteria
The criteria for performing a visibility query on a cell are:
(i) The cell's priority is greater than a predefined minimum priority.
(ii) The number of triangles in the cell is greater than a predefined threshold.
(iii) The current timestamp satisfies Tj > Ti + Fskip(j).
For criterion (ii), we could ideally skip the visibility query if the time to render the cell's triangles is small. In practice, however, it is hard to predict rendering time precisely: a cell may intersect only a few triangles that are very large and therefore have high fill-rate requirements. Performance differences among threshold values ranging from 1 to 50 are reported to be negligible [1]. In any case, our priority computation already accounts for these observations: if a very large triangle intersects a cell, the cell's pre-computed priority based on projected opacity is likely to be high in the viewing direction, so we simply set the triangle threshold to 1.
To capture all occluded cells precisely under these criteria, low-priority cells must also get a chance at the visibility query. A random number between 0 and 1 is generated, and condition (i) passes if its value is lower than the cell's priority. Very small priorities (including zero) are raised to a small value (e.g., 0.05), and cells with high priority values (> 0.9) skip this step. Hence, higher-priority cells have a higher probability of being queried, while lower-priority cells still have a chance. Once a cell is determined to be invisible, it is automatically 'locked' with priority 1 so that the visibility query is repeated until it becomes visible; in theory, all invisible cells can be found given sufficient time for a stationary view.
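To make the criteria concrete, here is a small C++ sketch (our illustration, not the report's code) of the query decision; the names triangleThreshold and Fskip follow the text, and the plain rand()-based draw is the naive version that Section 5 later replaces with a table lookup.

```cpp
// Sketch of the three query criteria: cells failing (ii) or (iii) are handled
// without a query (e.g. rendered directly); otherwise the cell is queried with
// probability proportional to its (clamped) priority.
#include <cstdlib>

bool shouldQuery(float priority, int triangleCount, int Tnow, int Tlast,
                 int Fskip, int triangleThreshold = 1)
{
    if (triangleCount <= triangleThreshold) return false;    // criterion (ii)
    if (Tnow <= Tlast + Fskip)              return false;    // criterion (iii)

    if (priority > 0.9f)  return true;                       // high priority: always query
    if (priority < 0.05f) priority = 0.05f;                  // give low-priority cells a chance

    float r = (float)std::rand() / (float)RAND_MAX;          // criterion (i)
    return r < priority;
}
```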
Figure 2 - The scene is divided into 27 regions around cell A. The priority of the cell that falls in the eye-point's region is used to compute cell A's priority; in this case cell B is chosen to combine with cell A's pre-processed priority.
4.6. Approximate culling
When a bounding box projects to only a few pixels on the screen, it is usually not noticeable, so it makes sense to skip such small cells to improve rendering speed. We therefore perform approximate culling by not rendering a cell if the number of fragments returned by its visibility query is less than a threshold.
5. Implementation
In our implementation, each cell stores 27 pointers indexing itself and its neighbors, so a cell can be located quickly during the priority accumulation phase instead of computing an index and looking it up in the 3D cell array. In addition, a dummy empty cell is created so that all boundary cells can point to it as a neighbor without special-case handling.
During preprocessing, we use a sample size of 128 x 128, i.e. 2^14, which allows us to store the computed priorities as integers in the range [0, 2^14] for efficient computation. Equation (1) is therefore modified to
P = Pc + Pn – (Pc × Pn >> 14)        (3)
As stated in Section 4.5, condition (i) requires a random number for each cell, which would result in heavy floating-point computation. Instead of the standard random number generator (e.g. rand()), we therefore use a table-based trick: the cell's priority is scaled (right-shifted by 7 bits) to the range 0 to 127 and used as an index into a pre-computed Boolean array of size 128. The number of "true" values in the array equals the index, and they are randomly distributed; a counter keeps track of the position of the last Boolean returned. In this way we avoid the time-consuming floating-point operations, minimizing the overhead of our priority computation and visibility selection algorithm.
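The following C++ sketch is one possible reading of this table trick; the 128x128 table layout and the per-row cursor are our assumptions rather than the report's exact data structure.

```cpp
// Sketch of the table-based randomness trick: row i of a pre-computed table
// contains exactly i true entries in random positions, so stepping through it
// yields "true" with probability roughly i/128 without floating-point work.
#include <algorithm>
#include <random>

struct PriorityDice {
    bool table[128][128];
    int  cursor[128] = {};

    PriorityDice() {
        std::mt19937 rng(12345);
        for (int i = 0; i < 128; ++i) {
            for (int j = 0; j < 128; ++j) table[i][j] = (j < i);  // i true entries
            std::shuffle(table[i], table[i] + 128, rng);           // randomly placed
        }
    }

    // priority is a fixed-point value in [0, 2^14]; >> 7 maps it to [0, 128].
    bool passes(int priority) {
        int idx = std::min(priority >> 7, 127);
        int& c = cursor[idx];
        bool result = table[idx][c];
        c = (c + 1) & 127;                // advance the per-row cursor cyclically
        return result;
    }
};
```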
To test frustum/cube intersection, the eye cell is located and the cells in front of it are checked for frustum intersection in slab order. The bounding box of a standard cell is stored in a display list as the visibility query geometry. Instead of using a programmable vertex shader [1], we transform the bounding box directly with glTranslate() and render it from the display list. This reduces the communication bandwidth between the host and the graphics hardware, since the display list is usually stored and optimized in graphics card memory. Note that back-face culling must be turned off for correct results; otherwise, for cells whose front faces lie outside the view frustum but whose back faces lie inside (e.g., the eye cell), rendering might be skipped accidentally. Figure 4 shows a screenshot of our implemented system.
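As an illustration of this step (not the exact code of the system), issuing one cell's box might look like the sketch below; boxList is an assumed display list containing the standard cell box, built once at startup.

```cpp
// Sketch: issue a standard cell's bounding box as occlusion-query geometry by
// translating a shared display list to the cell's position. Back-face culling is
// disabled so boxes surrounding the viewpoint (e.g. the eye cell) are not lost.
#include <GL/gl.h>

void queryCellBox(GLuint queryId, GLuint boxList, float x, float y, float z)
{
    glDisable(GL_CULL_FACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glPushMatrix();
    glTranslatef(x, y, z);               // move the standard box to this cell

    glBeginQuery(GL_SAMPLES_PASSED, queryId);
    glCallList(boxList);
    glEndQuery(GL_SAMPLES_PASSED);

    glPopMatrix();
    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```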
Besides the rendering results, our system also outputs important per-frame statistics for performance evaluation, such as the number of cells in the frustum Nf, the number of visibility queries used Nq, and the number of cells determined to be invisible by the visibility queries Nv, which satisfy
Nx × Ny × Nz >= Nf >= Nq >= Nv        (4)
One parameter we are particularly interested in is r = Nv / Nq, which measures the effectiveness of the hardware visibility queries. Our objective is to increase r without compromising the rendering frame rate.
6. Experimental Results
The results reported here are based on a system implemented on a 2.2 GHz Athlon XP with an nVidia GeForce FX 5200 graphics card (128 MB VRAM, AGP 4x interface). The parameters used for the experiments are: triangle threshold = 1, fragment threshold = 0 (conservative occlusion culling), and maximum frame penalty = 10. The viewpoint is animated along a predefined path for 500 frames. Five different models, shown in Figure 5 and classified into two categories, dense (a, b, c) and sparse (d, e), are used for the performance evaluation. In Figure 5, the left column shows the rendering results of the proposed approach; the middle column plots the runtime performance (frames per second, i.e. the reciprocal of the average frame time, over the animation) for three approaches: (i) the proposed prioritized visibility query, (ii) simple visibility query, and (iii) view frustum culling; and the right column plots r = Nv / Nq over the animation to indicate how effectively we make use of the hardware visibility queries.
Figure 3 - The fragment count changes by |Cj – Ci| / (Tj – Ti) per frame, so the currently visible fragments Cj are predicted to become invisible after Cj × (Tj – Ti) / |Cj – Ci| frames (the figure marks Ci at time Ti, Cj at time Tj, and Cj+w = 0 after a further w frames).
Figure 4 - Screenshot of the proposed prioritized visibility query system.
Model (a) is a model of the entire USC campus acquired with a LiDAR sensor, containing about 4.3M triangles. The result indicates that our approach makes successful use of the hardware visibility query about 90% of the time (the r value), versus about 55% of the time without prioritizing the cells. In this experiment, the overhead of the priority computation is smaller than that of the unfruitful visibility queries, which clearly indicates the advantage of our approach.
A similar result was obtained for model (b), of moderate complexity (~1M triangles), with r of about 95% for our approach versus about 70% for the other two.
Model (c) is part of the UNC Power Plant model (Section 1), with ~3.4M triangles. We navigated the viewpoint along a path passing through the inside and outside of the model. The results show that although the r value of our algorithm is higher, some highly complex cells remain visible, so rendering time dominates and the overall performance of the three algorithms is very similar.
The structures of models (d) and (e) are quite sparse, so the visibility query success rate should be lower; we used them to test the overhead of the visibility query computations. Model (d) is Section 16 of the UNC Power Plant model, with ~366K triangles. We first navigated the viewpoint from outside the scene, where the full model is visible, and then moved into the scene. Interestingly, once the viewpoint moved inside the model, the simple visibility algorithm was about 20% slower than view frustum culling alone, which clearly indicates the heavy overhead of hardware visibility queries. Our prioritized visibility algorithm, on the other hand, issues queries selectively, so its overhead is much lower. A similar conclusion was obtained from model (e).
Video clips for the above experimental results can be found at ftp://graphics.usc.edu/pub/pg
7. Conclusions and Future Work
In this paper, we presented a new occlusion culling approach based on prioritized visibility queries. We demonstrated that adding query priority to simple front-to-back slab visibility processing can greatly reduce the number of unfruitful visibility queries in occlusion culling computations. We also demonstrated how to exploit frame coherence, for both passed and failed queries, to enhance the effectiveness of the visibility culling technique. Combining our approach with a typical occlusion culling method, we applied it to several models of varying structural complexity; the results show that the approach minimizes the number of non-essential visibility queries and hence improves the overall performance of the rendering system. As hardware visibility queries become a commodity feature of graphics hardware, we believe the proposed approach is a way to maximize the benefit of this capability.
This paper reports our efforts to reduce the number of visibility queries wasted on visible cells; future work will aim to reduce the number of queries issued for invisible cells. One observation is that invisible cells are usually clustered together: when a cluster of cells is found invisible in one frame, the same cluster is likely to be invisible in the next frame, so we may be able to query them together to achieve a higher query success rate.
Another issue we would like to investigate further is reducing the overhead of frustum/cell intersection. The time spent in intersection computations is quite noticeable when the number of cell subdivisions is large. One possibility is to use the so-called cache separating plane [21] or a three-dimensional digital differential analyzer (3DDDA) to reduce the number of frustum/cell intersection tests.
Acknowledgement
This work is supported by the National Geospatial-Intelligence Agency (NGA) under an NGA University Research Initiative (NURI) program. We thank the
Integrated Media Systems Center, a National Science
Foundation Engineering Research Center, for their
support and facilities. Our thanks also go to HP, Intel,
and Microsoft for equipment donations.
References
[1] K. Hillesland, B. Salomon, A. Lastra, and D. Manocha, Fast and simple occlusion culling using hardware-based depth queries, UNC-CH Technical Report TR02-039, 2002
[2] N. Greene, M. Kass, and G. Miller, Hierarchical z-
buffer visibility, Proc. of ACM SIGGRAPH, 1993
[3] H. Zhang, D. Manocha, T. Hudson, and K. Hoff, Visibility culling using hierarchical occlusion maps, Proc. of ACM SIGGRAPH 1997
[4] J. Bittner, V. Havran and P. Slavik, Hierarchical
visibility culling with occlusion trees, Proc. of
Computer Graphics International 1998
[5] S. J. Teller and C. H. Sequin, Visibility preprocessing
for interactive walkthroughs, Proc. of ACM
SIGGRAPH 1991
[6] F. Durand, G. Drettakis, J. Thollot, and C. Puech, Conservative Visibility Preprocessing using Extended Projections, Proc. of ACM SIGGRAPH 2000
[7] G. Schaufler, J. Dorsey, X. Decoret, and F. Sillion, Conservative volumetric visibility with occluder fusion, Proc. of ACM SIGGRAPH 2000
[8] T. Leyvand, O. Sorkine, D. Cohen-Or, Ray Space
Factorization for From-Region Visibility, Proc. of
ACM SIGGRAPH 2003
[9] D. Aliaga, J. Cohen, A. Wilson, H. Zhang, C. Erikson,
K. Hoff, T. Hudson, W. Stuerzlinger, E. Baker, R.
Bastos, M. Whitton, F. Brooks, and D. Manocha.
MMR: An integrated massive model rendering system
using geometric and image-based acceleration, Proc.
of ACM Symposium on Interactive 3D Graphics, 1999
[10] P. Wonka, M. Wimmer, and F. Sillion, Instant visibility, Proc. of Eurographics, 2001
[11] B. Baxter, A. Sud, N. Govindaraju, and D. Manocha,
Gigawalk: Interactive walkthrough of complex 3D
environments, Proc. of Eurographics Workshop on
Rendering, 2002
[12] N. K. Govindaraju, A. Sud, S. E. Yoon, D. Manocha,
Interactive Visibility Culling in Complex
Environments using Occlusion-Switches, Proc. of
ACM Symposium on Interactive 3D Graphics, 2003
[13] D. Cohen-Or, Y. Chrysanthou, C. Silva, and F. Durand, A survey of visibility for walkthrough applications, IEEE Transactions on Visualization and Computer Graphics, Vol. 9, No. 3, 2003
[14] Mark Segal and Kurt Akeley, OpenGL 1.5
Specification
http://www.opengl.org/documentation/specs/version1
.5/glspec15.pdf
[15] D. Bartz, M. Meißner, and T. Hüttner, OpenGL-assisted occlusion culling for large polygonal models, Computers & Graphics 23(3), 1999
[16] J. Klosowski and C. Silva, The Prioritized-Layered
Projection algorithm for visible set estimation, IEEE
Transaction on Visualization and Computer Graphics
6(2), 2000
[17] J. Klosowski and C. Silva, Efficient conservative
visibility culling using the prioritized-layered
projection algorithm, IEEE Transaction on
Visualization and Computer Graphics 7(4), 2001
[18] D. Bartz, J. T. Klosowski, and D. Staneker, k-DOPs as Tighter Bounding Volumes for Better Occlusion Performance, Proc. of ACM SIGGRAPH, Conference Abstracts and Applications, 2001
[19] Don Hatch, Fast Polygon-Cube Intersection Testing,
Graphics Gems V
[20] Kenny Hoff, Fast AABB/View-Frustum Overlap Test,
http://www.cs.unc.edu/~hoff/research/vfculler/boxvfc
/boxvfc.html
[21] U. Assarsson and T. Moller, Optimized View
Frustum Culling Algorithms, Technical Report 99-3,
Department of Computer Engineering, Chalmers
University of Technology, 1999.
[22] S. E. Yoon, B. Salomon, and D. Manocha,
Interactive View-dependent Rendering with
Conservative Occlusion Culling in Complex
Environments, IEEE Visualization, 2003.
[23] V. Koltun and D. Cohen-Or, Selecting Effective Occluders for Visibility Culling, Eurographics 2000.
[24] J. Shade, D. Lischinski, D. H. Salesin, T. DeRose, and J. Snyder, Hierarchical image caching for accelerated walkthroughs of complex environments, Proc. of ACM SIGGRAPH 1996.
Figure 5 - Experimental results. For all charts (a)-(e), the horizontal axis is the frame number (1 to 500), the maximum frame penalty is 10, and the same legend is used: Prioritized Visibility Query, Simple Visibility Query, View Frustum Only. The middle-column vertical axis is frames per second (fps); the right-column vertical axis is the visibility query success rate r.
Figure 5a - USC campus LiDAR model, cell size 25x2x20, 4.3M triangles.
Figure 5b - Dragon model, cell size 25x24x11, 971K triangles.
Figure 5c - UNC Power Plant model (Section 1), cell size 15x16x25, 3.4M triangles.
Figure 5d - UNC Power Plant model (Section 16), cell size 13x25x18, 366K triangles.
Figure 5e - USC campus refined model, cell size 17x25x2, 39K triangles.