Computer Graphics, Semester 3, Model examination, March 2021

 

 

SAINTGITS COLLEGE OF APPLIED SCIENCES

PATHAMUTTOM, KOTTAYAM
MODEL EXAMINATION, MARCH 2020

Department of Computer Applications, Semester III

COMPUTER GRAPHICS

Answer Key

 

Total: 80 marks                                                         Time: 3 hours

Section A

Answer any 10 questions. Each question carries 2 marks.

 

1. Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points. Stored intensity values are then retrieved from the refresh buffer and "painted" on the screen one row (scan line) at a time. Each screen point is referred to as a pixel.
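For illustration, a minimal Python sketch of a frame buffer (names are illustrative; a real refresh buffer lives in dedicated display memory):

```python
# A frame buffer sketched as a 2D array: one intensity value per pixel,
# assuming an 8-bit grayscale display of width x height screen points.
width, height = 640, 480
frame_buffer = [[0] * width for _ in range(height)]

def set_pixel(x, y, intensity):
    """Store an intensity value for the screen point (pixel) at (x, y)."""
    frame_buffer[y][x] = intensity

def refresh():
    """'Paint' the stored intensities one scan line (row) at a time."""
    for scan_line in frame_buffer:
        pass  # each row would be sent to the display hardware here

set_pixel(10, 20, 255)
```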

2. Typefaces (or fonts) can be divided into two broad groups: serif and sans serif. Serif type has small lines or accents at the ends of the main character strokes, while sans-serif type does not have these accents.

3. Another kind of constraint is a grid of rectangular lines displayed in some part of the screen area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid lines.
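The rounding rule can be sketched in Python as follows (function name is illustrative):

```python
def snap_to_grid(x, y, spacing):
    """Round an input coordinate position to the nearest
    intersection of two grid lines, for a grid of the given spacing."""
    return (round(x / spacing) * spacing,
            round(y / spacing) * spacing)

# A point input at (13, 27) on a 10-unit grid snaps to (10, 30).
```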

4. Clipping divides each element of a picture into a visible and an invisible portion; the visible portion is retained, and the invisible portion is discarded.

Types of Clipping:

1.   Point Clipping

2.   Line Clipping

3.   Area Clipping (Polygon)

4.   Curve Clipping

5.   Text Clipping

5. A  simple method for translation in the xy plane is to transfer a rectangular block of pixel values from one location to another. Sequences of raster operations can be executed to produce real-time animation of either two-dimensional or three-dimensional objects, as long as we restrict the animation to motions in the projection plane.
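A minimal sketch of such a raster block transfer in Python (names and the list-of-lists raster are illustrative):

```python
def copy_block(raster, src_x, src_y, w, h, dst_x, dst_y):
    """Translate a rectangular block of pixel values from one
    raster location to another (the block is read out first, so
    overlapping source and destination regions are handled)."""
    block = [row[src_x:src_x + w] for row in raster[src_y:src_y + h]]
    for dy, row in enumerate(block):
        raster[dst_y + dy][dst_x:dst_x + w] = row

# Demo: a 2x2 block of intensity 1 moved from (0, 0) to (3, 1).
raster = [[0] * 6 for _ in range(4)]
raster[0][0:2] = [1, 1]
raster[1][0:2] = [1, 1]
copy_block(raster, 0, 0, 2, 2, 3, 1)
```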

6. Emissive displays are devices that convert electrical energy into light; the image is produced directly on the screen. Non-emissive displays use optical effects to convert sunlight or light from some other source into graphics patterns; the light is produced behind the screen, and the image is formed by filtering this light.

7. Pencil-shaped devices (light pens) are used to select screen positions by detecting the light coming from points on the CRT screen. They are sensitive to the short burst of light emitted from the phosphor coating at the instant the electron beam strikes a particular point.

8. A keyframe is a detailed drawing of the scene at a certain time in the animation sequence. Within each key frame, each object is positioned according to the time for that frame.

9. The most straightforward method for defining a motion sequence is direct specification   of the motion parameters. Here, we explicitly give the rotation angles and translation vectors. Then the geometric transformation matrices are applied to transform coordinate positions.

10. Bitmap fonts require more space, because each variation (size and format) must be stored in a font cache. It is possible to generate different sizes and other variations, such as bold and italic, from one set, but this usually does not produce good results.

11. A world-coordinate area selected for display is called a window. An area on a display device to which a window is mapped is called a viewport. The window defines what is to be viewed; the viewport defines where it is to be displayed.
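The window-to-viewport mapping can be sketched as a pair of scaling and translation steps (Python, illustrative names; windows and viewports given as (xmin, ymin, xmax, ymax)):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map a world point from the window (what is viewed)
    to the viewport (where it is displayed)."""
    wxmin, wymin, wxmax, wymax = win
    vxmin, vymin, vxmax, vymax = vp
    sx = (vxmax - vxmin) / (wxmax - wxmin)  # horizontal scale factor
    sy = (vymax - vymin) / (wymax - wymin)  # vertical scale factor
    return (vxmin + (xw - wxmin) * sx,
            vymin + (yw - wymin) * sy)
```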

12. Solid-modeling packages often provide a number of construction techniques. Sweep representations are useful for constructing three-dimensional objects that possess translational, rotational, or other symmetries.

(10 x 2 = 20 Marks)

Section B

Answer any 6 questions. Each question carries 5 marks.

13. For a given radius r and screen center position (xc, yc), we can first set up our algorithm to calculate pixel positions around a circle path centered at the coordinate origin (0, 0). Each calculated position (x, y) is then moved to its proper screen position by adding xc to x and yc to y. Along the circle section from x = 0 to x = y in the first quadrant, the slope of the curve varies from 0 to -1. Therefore, we can take unit steps in the positive x direction over this octant and use a decision parameter to determine which of the two possible y positions is closer to the circle path at each step. Positions in the other seven octants are then obtained by symmetry.

      Given a circle radius r = 10, we demonstrate the midpoint circle algorithm by determining positions along the circle octant in the first quadrant from x = 0 to x = y. For a circle centered on the coordinate origin, the initial point is (x0, y0) = (0, 10), the initial value of the decision parameter is p0 = 1 - r = -9, and the initial increment terms for calculating the decision parameters are 2x0 = 0 and 2y0 = 20. Successive decision-parameter values and positions along the circle path are calculated using the midpoint method as:

      k | pk | (xk+1, yk+1)
      0 | -9 | (1, 10)
      1 | -6 | (2, 10)
      2 | -1 | (3, 10)
      3 |  6 | (4, 9)
      4 | -3 | (5, 9)
      5 |  8 | (6, 8)
      6 |  5 | (7, 7)
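The midpoint circle calculation for one octant can be sketched in Python as (function name illustrative):

```python
def midpoint_circle(r):
    """Midpoint circle algorithm: successive pixel positions along
    the octant from x = 0 to x = y, for a circle centered at the origin."""
    points = []
    x, y = 0, r
    p = 1 - r  # initial decision parameter (5/4 - r rounded to an integer)
    while x < y:
        x += 1
        if p < 0:            # midpoint inside the circle: keep y
            p += 2 * x + 1
        else:                # midpoint outside: step y down
            y -= 1
            p += 2 * x + 1 - 2 * y
        points.append((x, y))
    return points
```

For r = 10 this reproduces the table of positions above; each screen position would then be offset by the circle center (xc, yc).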



14. Transformation of object shapes from one form to another is called morphing, which is a shortened form of metamorphosis. Morphing methods can be applied to any motion or transition involving a change in shape. Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or the number of vertices) is the same for the two frames. A straight-line segment in key frame k is transformed into two line segments in key frame k + 1. Linear interpolation is then used to generate the in-betweens.
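Once both key frames have matching vertex counts, the in-betweens follow by linear interpolation; a minimal Python sketch (names illustrative):

```python
def lerp_frames(frame_a, frame_b, t):
    """Generate an in-between by linearly interpolating matching
    vertex lists of two key frames, for t in [0, 1]."""
    return [(xa + t * (xb - xa), ya + t * (yb - ya))
            for (xa, ya), (xb, yb) in zip(frame_a, frame_b)]

# Halfway (t = 0.5) between two 2-vertex key frames.
```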



15. Another technique for solid modeling is to combine the volumes occupied by overlapping three-dimensional objects using set operations. This modeling method, called constructive solid geometry (CSG), creates a new volume by applying the union, intersection, or difference operation to two specified volumes. A CSG application starts with an initial set of three-dimensional objects (primitives). Ray-casting methods are commonly used to implement constructive solid geometry operations when objects are described with boundary representations.
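The set operations can be illustrated with point-membership tests, assuming each volume is represented as a predicate that reports whether a point lies inside it (all names hypothetical):

```python
# CSG combiners: each takes two volumes and returns a new volume.
def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

def sphere(cx, cy, cz, r):
    """A primitive volume: the solid sphere of radius r at (cx, cy, cz)."""
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r * r

# A new volume: a large sphere with a smaller sphere carved out of it.
solid = difference(sphere(0, 0, 0, 2), sphere(2, 0, 0, 1))
```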

 

16. To generate a rotation transformation for an object, we must designate an axis of rotation (about which the object is to be rotated) and the amount of angular rotation. Unlike two-dimensional applications, where all transformations are carried out in the xy plane, a three-dimensional rotation can be specified around any line in space. The easiest rotation axes to handle are those that are parallel to the coordinate axes.
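As an example of the simplest case, rotation about the z axis leaves z unchanged and rotates (x, y) in the xy plane (Python sketch, function name illustrative):

```python
import math

def rotate_z(x, y, z, theta):
    """Rotate a point about the z axis by angle theta (radians):
    x' = x cos(theta) - y sin(theta), y' = x sin(theta) + y cos(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)
```

Rotations about the x or y axes follow the same pattern with the coordinate roles cyclically permuted.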



17. There are two basic projection methods. In a parallel projection, coordinate positions are transformed to the view plane along parallel lines. For a perspective projection, object positions are transformed to the view plane along lines that converge to a point called the projection reference point (or center of projection). The projected view of an object is determined by calculating the intersection of the projection lines with the view plane.
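The two methods can be sketched in Python under one common textbook convention (projection reference point at the origin, view plane perpendicular to the z axis; names illustrative):

```python
def orthographic_project(x, y, z):
    """Parallel (orthographic) projection onto the view plane:
    projection lines are parallel to the z axis, so z is dropped."""
    return (x, y)

def perspective_project(x, y, z, d):
    """Perspective projection onto the view plane z = d, with the
    projection reference point at the origin (similar triangles)."""
    return (x * d / z, y * d / z)
```

Note how the perspective result shrinks with distance z, while the parallel projection preserves size.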




18. Some graphics packages (for example, PHIGS) provide several polygon functions for modeling objects. A single plane surface can be specified with a function such as fillArea. But when object surfaces are to be tiled, it is more convenient to specify the surface facets with a mesh function. One type of polygon mesh is the triangle strip.
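A triangle strip of n vertices defines n - 2 connected triangles, each new vertex forming a triangle with the previous two; a short Python sketch (name illustrative):

```python
def triangle_strip(vertices):
    """Expand a triangle-strip vertex list into its individual
    triangles: n vertices yield n - 2 connected triangles."""
    return [(vertices[i], vertices[i + 1], vertices[i + 2])
            for i in range(len(vertices) - 2)]
```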



19. Story board layout

      The storyboard is an outline of the action. It defines the motion sequence as a set of basic events that are to take place. Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches or it could be a list of the basic ideas for the motion.

Keyframe specifications

      A keyframe is a detailed drawing of the scene at a certain time in the animation sequence. Within each key frame, each object is positioned according to the time for that frame. Some key frames are chosen at extreme positions in the action; others are spaced so that the time interval between key frames is not too great. More key frames are specified for intricate motions than for simple, slowly varying motions.

Object definitions

      An object definition is given for each participant in the action. Objects can be defined in terms of basic shapes, such as polygons or splines. In addition, the associated movements for each object are specified along with the shape.

 

20. A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx, y' = y + ty
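The translation equations map directly to code (Python sketch, function name illustrative):

```python
def translate(x, y, tx, ty):
    """Move the point (x, y) along a straight-line path by the
    translation distances tx and ty: x' = x + tx, y' = y + ty."""
    return (x + tx, y + ty)
```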




 

21. Data Glove

      A data glove can be used to grasp a "virtual" object. The glove is constructed with a series of sensors that detect hand and finger motions. Electromagnetic coupling between transmitting antennas and receiving antennas is used to provide information about the position and orientation of the hand. The transmitting and receiving antennas can each be structured as a set of three mutually perpendicular coils, forming a three-dimensional Cartesian coordinate system. Input from the glove can be used to position or manipulate objects in a virtual scene.

Digitizer

      A digitizer is a common device for interactively selecting coordinate positions on an object. These devices can be used to input coordinate values in either a two-dimensional or a three-dimensional space. Typically, a digitizer is used to scan over a drawing or object and to input a set of discrete coordinate positions, which can be joined with straight-line segments to approximate the curve or surface shapes.

Touch Panel

      Touch panels allow displayed objects or screen positions to be selected with the touch of a finger. A typical application of touch panels is for the selection of processing options that are represented with graphical icons. Other systems can be adapted for touch input by fitting a transparent device with a touch-sensing mechanism over the video monitor screen. Touch input can be recorded using optical, electrical, or acoustical methods. Optical touch panels employ a line of infrared light-emitting diodes (LEDs) along one vertical edge and along one horizontal edge of the frame. The opposite vertical and horizontal edges contain light detectors. These detectors are used to record which beams are interrupted when the panel is touched.

Light pen

        Pencil-shaped devices are used to select screen positions by detecting the light coming from points on the CRT screen. They are sensitive to the short burst of light emitted from the phosphor coating at the instant the electron beam strikes a particular point. Other light sources, such as the background light in the room, are usually not detected by a light pen. An activated light pen, pointed at a spot on the screen as the electron beam lights up that spot, generates an electrical pulse that causes the coordinate position of the electron beam to be recorded. As with cursor-positioning devices, recorded light-pen coordinates can be used to position an object or to select a processing option.

(6 x 5 = 30 Marks)

 

 

Section C

Answer any 2 questions. Each question carries 15 marks.

22. [Figure not reproduced in this copy.] Explain its working.

23. A polygon boundary processed with a line clipper may be displayed as a series of unconnected line segments, depending on the orientation of the polygon to the clipping window. For polygon clipping, we require an algorithm that will generate one or more closed areas that are then scan converted for the appropriate area fill. The output of a polygon clipper should be a sequence of vertices that defines the clipped polygon boundaries.
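One standard algorithm with this behavior is Sutherland-Hodgman clipping, which passes the vertex list through one clipper per window boundary; a Python sketch for a rectangular window (names illustrative):

```python
def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman polygon clipping against a rectangular window:
    the output is a closed vertex sequence, not loose line segments."""
    def clip_edge(verts, inside, intersect):
        out = []
        for i, cur in enumerate(verts):
            prev = verts[i - 1]           # wraps to the last vertex at i = 0
            if inside(cur):
                if not inside(prev):      # entering: add boundary crossing
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):            # leaving: add boundary crossing
                out.append(intersect(prev, cur))
        return out

    def ix_x(p, q, x):  # crossing with a vertical boundary x = const
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def ix_y(p, q, y):  # crossing with a horizontal boundary y = const
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    verts = polygon
    verts = clip_edge(verts, lambda p: p[0] >= xmin, lambda p, q: ix_x(p, q, xmin))
    verts = clip_edge(verts, lambda p: p[0] <= xmax, lambda p, q: ix_x(p, q, xmax))
    verts = clip_edge(verts, lambda p: p[1] >= ymin, lambda p, q: ix_y(p, q, ymin))
    verts = clip_edge(verts, lambda p: p[1] <= ymax, lambda p, q: ix_y(p, q, ymax))
    return verts
```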




 

24. A keyframe is a detailed drawing of the scene at a certain time in the animation sequence. Within each key frame, each object is positioned according to the time for that frame. Some key frames are chosen at extreme positions in the action; others are spaced so that the time interval between key frames is not too great. More key frames are specified for intricate motions than for simple, slowly varying motions.

Explain with an example in morphing.

25. An accurate and efficient raster line-generating algorithm, developed by Bresenham, scan converts lines using only incremental integer calculations that can be adapted to display circles and other curves. The vertical axes show scan-line positions, and the horizontal axes identify pixel columns. Sampling at unit x intervals in these examples, we need to decide which of two possible pixel positions is closer to the line path at each sample step. Write Bresenham's line drawing algorithm with example.
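For slopes between 0 and 1, Bresenham's algorithm can be sketched in Python as (function name illustrative):

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham's line algorithm for 0 <= slope <= 1 and x0 < x1,
    using only incremental integer calculations."""
    dx, dy = x1 - x0, y1 - y0
    p = 2 * dy - dx              # initial decision parameter
    points = [(x0, y0)]
    x, y = x0, y0
    while x < x1:
        x += 1
        if p < 0:                # next pixel stays on the same scan line
            p += 2 * dy
        else:                    # next pixel steps up one scan line
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points
```

Example: for the line from (20, 10) to (30, 18), dx = 10, dy = 8, p0 = 6, and the algorithm plots (20, 10), (21, 11), (22, 12), (23, 12), (24, 13), (25, 14), (26, 15), (27, 16), (28, 16), (29, 17), (30, 18).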

                                                                                                         (2 x 15 = 30 Marks)
