DBMS, Semester 2 (2021 Admissions), Model Examination, Sept 2022
SAINTGITS COLLEGE OF APPLIED SCIENCES
PATHAMUTTOM, KOTTAYAM
Model Examination, SEPT 2022
PG Department of Computer Applications and AI Semester 2
Database Management Systems
Total: 80 marks                                          Time: 3 hours
Section A
Answer any 6 questions. Each question carries 3 marks.
(Write in not less than a paragraph)
1. Define data, database and DBMS.
Data refers to raw facts that can be recorded and that have an implicit meaning.
A database is a collection of interrelated data.
A DBMS is a software system used to create, maintain and manage databases.
2. List out the end users of a DBMS.
The different types of end users are as follows:
Casual end users
Parametric end users
Sophisticated end users
Standalone users
3. Differentiate entity and attribute.
An entity is a single, distinguishable object in the real world, such as a particular student or employee.
An attribute is a characteristic or property that describes an entity type, such as the student's name or age.
4. Compare Primary Key and Foreign Key.
The primary key is a unique or non-null key that uniquely identifies every record in a table
or relation. Each database needs a unique identifier for every row of a table, and the primary
key plays a vital role in identifying rows in the table uniquely. The primary key column can't
store duplicate values.
A foreign key is a column or group of columns in one table that refers to the primary key of another table, maintaining referential integrity between the two. It is also known as the referencing key, because it establishes a relationship between two different tables: each value in the foreign key column must match a value in the referenced primary key column of the other table.
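A minimal sketch in SQL (table and column names are illustrative, not from the syllabus):

    -- Parent table: dept_id is the primary key.
    CREATE TABLE department (
        dept_id   INT PRIMARY KEY,
        dept_name VARCHAR(50)
    );

    -- Child table: dept_id here is a foreign key that must match an
    -- existing department.dept_id (referential integrity).
    CREATE TABLE employee (
        emp_id  INT PRIMARY KEY,
        name    VARCHAR(50),
        dept_id INT,
        FOREIGN KEY (dept_id) REFERENCES department(dept_id)
    );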
5. Explain weak entity.
A weak entity cannot exist independently: it depends on a strong entity type known as its owner entity, and the relationship that connects the weak entity to its owner is called the identifying relationship.
A weak entity always has a total participation constraint with respect to its identifying relationship, because it cannot be identified independently of its owner entity.
A weak entity may have a partial key: a set of attributes that distinguishes weak entities related to the same owner entity.
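As an illustrative sketch, a weak entity is usually mapped to a table whose primary key combines the owner's key with the partial key (names here are hypothetical):

    -- dependent is a weak entity owned by employee: it is identified by
    -- the owner's emp_id together with its partial key dep_name.
    CREATE TABLE dependent (
        emp_id   INT,
        dep_name VARCHAR(50),
        dob      DATE,
        PRIMARY KEY (emp_id, dep_name),                   -- owner key + partial key
        FOREIGN KEY (emp_id) REFERENCES employee(emp_id)  -- identifying relationship
    );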
6. What is the ALTER command?
The SQL ALTER TABLE command is used to add, delete or modify columns in an existing table. It is also used to add and drop various constraints on an existing table.
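A few illustrative forms (the exact column-modification syntax varies between vendors):

    ALTER TABLE employee ADD email VARCHAR(100);                 -- add a column
    ALTER TABLE employee DROP COLUMN email;                      -- delete a column
    ALTER TABLE employee MODIFY name VARCHAR(100);               -- modify a column (MySQL/Oracle style)
    ALTER TABLE employee ADD CONSTRAINT uq_name UNIQUE (name);   -- add a constraint
    ALTER TABLE employee DROP CONSTRAINT uq_name;                -- drop a constraint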
7. Define DDL.
Data Definition Language (DDL) is a language that allows the user to define the data and their relationships to other types of data. DDL statements work with the structure of database tables, covering:
the various data types used in defining columns in a database table,
integrity and value constraints, and
viewing, modifying and removing a table structure.
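For instance, a single CREATE TABLE statement defines columns, data types and constraints together (a sketch; names are illustrative):

    CREATE TABLE student (
        roll_no INT PRIMARY KEY,                     -- integrity constraint
        name    VARCHAR(50) NOT NULL,                -- value constraint
        marks   INT CHECK (marks BETWEEN 0 AND 100)  -- value constraint
    );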
8. Briefly explain the set operations in SQL.
Set operators are used to join the results of two (or more) SELECT statements. The set operators available are UNION, UNION ALL, INTERSECT and MINUS.
The UNION set operator returns the combined results of the two SELECT statements, removing duplicates: only one row is listed for each duplicated result. To retain duplicates in the final result, use the UNION ALL set operator instead. INTERSECT lists only the records that are common to both SELECT queries, while the MINUS set operator removes the second query's results from the output if they also appear in the first query's results. INTERSECT and MINUS produce unduplicated results.
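A quick sketch, assuming two union-compatible tables (MINUS is the Oracle name; standard SQL calls the same operator EXCEPT):

    SELECT name FROM current_students
    UNION                 -- combined results, duplicates removed
    SELECT name FROM alumni;

    SELECT name FROM current_students
    UNION ALL             -- duplicates retained
    SELECT name FROM alumni;

    SELECT name FROM current_students
    INTERSECT             -- rows common to both queries
    SELECT name FROM alumni;

    SELECT name FROM current_students
    MINUS                 -- rows in the first query but not in the second
    SELECT name FROM alumni;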
9. What is the need for normalization in a database?
Normalization is a technique for organizing data in a database. A database is normalized to minimize redundancy (duplicate data) and to ensure that only related data is stored in each table. Normalization also prevents anomalies stemming from database modifications such as insertions, deletions and updates.
10. Define transaction processing.
A transaction is a program comprising a collection of database operations, executed as a logical unit of data processing. The operations performed in a transaction include one or more database operations such as insert, delete, update or retrieve. A transaction is an atomic process: it is either performed to completion or not performed at all. A transaction involving only data retrieval, without any data update, is called a read-only transaction.
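A minimal sketch of a transaction as one logical unit (a funds transfer; table and column names are hypothetical):

    BEGIN;   -- start of transaction (START TRANSACTION in MySQL)
    UPDATE account SET balance = balance - 500 WHERE acc_no = 101;
    UPDATE account SET balance = balance + 500 WHERE acc_no = 202;
    COMMIT;  -- both updates become permanent together; ROLLBACK would undo both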
11. State the dirty read problem.
A dirty read is a read of uncommitted data. If one transaction modifies a row but has not yet committed, and a concurrently running transaction reads that same row, the second transaction has read uncommitted data; this situation is called a dirty read.
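A sketch of the interleaving between two sessions (names and values are hypothetical; the read requires a permissive isolation level such as READ UNCOMMITTED):

    -- Session 1: modifies a row but does not commit yet.
    BEGIN;
    UPDATE account SET balance = 900 WHERE acc_no = 101;

    -- Session 2, running concurrently, reads balance = 900
    -- even though it is uncommitted: a dirty read.
    SELECT balance FROM account WHERE acc_no = 101;

    -- Session 1 now rolls back; the value Session 2 saw never officially existed.
    ROLLBACK;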
12. What do you mean by granting of privileges?
A privilege is permission to execute one particular type of SQL statement or to access another user's object. Privileges cover the different Data Manipulation Language (DML) operations a user may perform on data, such as INSERT, UPDATE, SELECT and DELETE. Database privileges control the use of computing resources; they do not apply to the database administrator of the database.
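Typical GRANT and REVOKE statements (user and table names are illustrative):

    GRANT SELECT, INSERT ON employee TO clerk1;     -- grant specific privileges
    GRANT ALL PRIVILEGES ON employee TO manager1;   -- grant every privilege on the table
    REVOKE INSERT ON employee FROM clerk1;          -- take a privilege back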
(6 x 3 = 18 Marks)
Section B
Answer any 4 questions. Each question carries 8 marks.
(Write in not less than 2 pages)
13. Explain different data models.
Physical level: This is the lowest level of data abstraction. It describes how the data is actually stored in the database; the complex data-structure details appear at this level.
Logical level: This is the middle level of the three-level data abstraction architecture. It describes what data is stored in the database.
View level: This is the highest level of data abstraction. It describes the user's interaction with the database system.
14. Write about the characteristics of a relation.
Ordering of tuples in a relation r(R): the tuples in a relation are not considered to be ordered, even though they appear to be in tabular form.
Ordering of attributes in a relation schema R (and of values within each tuple): attributes are ordered according to the definition of the schema, and the values within each tuple are ordered correspondingly.
Values in a tuple: all values are considered atomic (indivisible), and each value in a tuple must be from the domain of the attribute for that column.
15. Discuss the concept of entity relationship model.
ER model stands for Entity-Relationship model. It is a high-level data model used to define the data elements and relationships for a specified system.
It develops a conceptual design for the database, giving a very simple and easy-to-design view of data.
In ER modeling, the database structure is portrayed as a diagram called an entity-relationship diagram.
16. Explain cardinality and participation in detail.
In a relationship, the participation constraint specifies the existence of an entity when it is related to another entity in the relationship type; it is also called the minimum cardinality constraint. There are two types of participation constraint: total participation, in which every entity of the entity set must take part in the relationship, and partial participation, in which some entities may not.
In database management, cardinality plays an important role. Cardinality represents the number of times an entity of an entity set participates in a relationship set; equivalently, the cardinality of a relationship is the number of tuples (rows) in the relationship.
17. Explain Aggregate functions in detail.
In database management, an aggregate function is a function in which the values of multiple rows are grouped together as input, on certain criteria, to form a single value of more significant meaning.
Various Aggregate Functions
1) Count()
2) Sum()
3) Avg()
4) Min()
5) Max()
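Each of these collapses a set of rows into a single value, as in this sketch (table and column names are illustrative):

    SELECT COUNT(*)    AS num_employees,   -- number of rows
           SUM(salary) AS total_payroll,   -- total of a column
           AVG(salary) AS average_salary,  -- arithmetic mean
           MIN(salary) AS lowest_salary,   -- smallest value
           MAX(salary) AS highest_salary   -- largest value
    FROM employee;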
18. Explain the GROUP BY and HAVING clauses.
The GROUP BY clause groups rows that have the same values into summary rows, and is typically combined with aggregate functions. The HAVING clause filters those groups: it is used instead of WHERE when the condition involves an aggregate function, because WHERE filters individual rows before grouping while HAVING filters whole groups after grouping. The HAVING clause is always used after the GROUP BY clause.
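A sketch showing the order of evaluation (names are illustrative):

    SELECT dept_id, AVG(salary) AS avg_salary
    FROM employee
    WHERE status = 'ACTIVE'        -- row filter, applied before grouping
    GROUP BY dept_id               -- one summary row per department
    HAVING AVG(salary) > 30000;    -- group filter, applied after grouping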
19. Explain about database security.
To protect databases against threats to their integrity, availability and confidentiality, four kinds of countermeasures can be implemented:
Access control
Inference control
Flow control
Encryption
20. Explain the following: a) 1NF b) 2NF c) 3NF
Normalization
Here are the most commonly used normal forms:
First normal form(1NF)
Second normal form(2NF)
Third normal form(3NF)
Boyce & Codd normal form (BCNF)
First normal form (1NF)
A relation is said to be in 1NF (first normal form) if it doesn't contain any multi-valued attribute. In other words, a relation is in 1NF if each attribute contains only atomic (single) values.
Second normal form (2NF)
A table is said to be in 2NF if both the following conditions hold:
Table is in 1NF (First normal form)
No non-prime attribute is dependent on any proper subset of any candidate
key of the table.
Third Normal form (3NF)
A table design is said to be in 3NF if both the following conditions hold:
Table must be in 2NF
Transitive functional dependency of non-prime attribute on any super key
should be removed.
An attribute that is not part of any candidate key is known as non-prime attribute.
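As a small illustrative example (names hypothetical), a table student(roll_no, name, dept_id, dept_name) has the transitive dependency roll_no → dept_id → dept_name; 3NF removes it by splitting the table in two:

    -- After decomposition, dept_name depends only on dept_id.
    CREATE TABLE department (
        dept_id   INT PRIMARY KEY,
        dept_name VARCHAR(50)
    );

    -- student now holds only attributes that depend directly on roll_no.
    CREATE TABLE student (
        roll_no INT PRIMARY KEY,
        name    VARCHAR(50),
        dept_id INT REFERENCES department(dept_id)
    );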
21. Define the desirable properties of a transaction.
The desirable properties of a transaction are the ACID properties:
Atomicity: a transaction is an indivisible unit; either all of its operations are performed or none of them are.
Consistency: a complete transaction takes the database from one consistent state to another.
Isolation: a transaction executes as if it were the only transaction in the system; its intermediate results are hidden from concurrently running transactions.
Durability: once a transaction commits, its changes persist even if the system subsequently fails.
(4 x 8 = 32 Marks)
Section C
Answer up to 3 questions carrying 15 marks each. However, the total marks for this
section should not exceed 30; marks scored over 30 will be ignored.
22. Describe the characteristics of DBMS.
The main characteristics of the database approach versus the file-processing approach are the following:
Self-describing nature of a database system
Insulation between programs and data, and data abstraction
Support of multiple views of the data
Sharing of data and multiuser transaction processing
(Explain each one in detail.)
23. Write about ER model and relational model.
ER Model vs. Relational Model Comparison
The following comparison explains their main differences in a quick manner:
Basic: The ER model is used to describe a set of objects known as entities, as well as the relationships between them, whereas the relational model is used to represent a collection of tables as well as the relationships between them.
Type: The ER model is a high-level or conceptual model; the relational model is an implementation or representational model.
Components: The ER model represents components as Entity, Entity Type and Entity Set; the relational model represents components as domains, attributes and tuples.
Used By: The ER model is helpful for people who don't have any knowledge of how the data is implemented; the relational model is mostly popular among programmers.
Relationship: In the ER model it is easy to understand the relationship between entities; compared to the ER model, it is easier to derive the relation between tables in the relational model.
Mapping: The ER model explains how to map cardinalities (here, the number of times an entity participates in a relationship); the relational model does not describe mapping cardinalities.
24. Explain about indexing structures for files.
o An index is a type of data structure used to locate and access the data in a database table quickly.
Index structure:
Indexes can be created using some database columns.
o The first column of the index is the search key. It contains a copy of the primary key or candidate key of the table; these values are stored in sorted order so that the corresponding data can be accessed easily.
o The second column of the index is the data reference. It contains a set of pointers holding the address of the disk block where the value of the particular key can be found.
SINGLE LEVEL INDEX
A single-level index is an auxiliary file that makes it more efficient to search for a
record in the data file.
The index is usually specified on one field of the file (although it could be specified
on several fields)
One form of an index is a file of entries <field value, pointer to record>, which is
ordered by field value
The index is called an access path on the field.
The index file usually occupies considerably fewer disk blocks than the data file
because its entries are much smaller
A binary search on the index yields a pointer to the file record
Indexes can also be characterized as dense or sparse
A dense index has an index entry for every search key value (and
hence every record) in the data file.
A sparse (or nondense) index, on the other hand, has index entries
for only some of the search values
Types of Single-level Ordered Indexes
Primary Indexes
Clustering Indexes
Secondary Indexes
Primary Index
Defined on an ordered data file
If the index is created on the basis of the primary key of the table, then it is
known as primary indexing. These primary keys are unique to each record
and contain 1:1 relation between the records. As primary keys are stored in
sorted order, the performance of the searching operation is quite efficient.
The data file is ordered on a key field
Includes one index entry for each block in the data file; the index entry
has the key field value for the first record in the block, which is called
the block anchor
A similar scheme can use the last record in a block.
A primary index is a nondense (sparse) index, since it includes an entry for each disk
block of the data file, keyed on its anchor record, rather than an entry for every search
value
Clustering Index
Defined on an ordered data file
The data file is ordered on a non-key field unlike primary index, which
requires that the ordering field of the data file have a distinct value for
each record.
Includes one index entry for each distinct value of the field; the index
entry points to the first data block that contains records with that field
value.
It is another example of nondense index where Insertion and Deletion
is relatively straightforward with a clustering index.
o Records which have similar characteristics (the same value of the non-key
ordering field) are grouped together, and one index entry is created for each
group; indexing these groups rather than individual records is what makes it
a clustering index.
Secondary Index
A secondary index provides a secondary means of accessing a file for
which some primary access already exists.
The secondary index may be on a field which is a candidate key and
has a unique value in every record, or a non-key with duplicate values.
The index is an ordered file with two fields.
The first field is of the same data type as some non-ordering
field of the data file that is an indexing field.
The second field is either a block pointer or a record pointer.
There can be many secondary indexes (and hence, indexing
fields) for the same file.
Includes one entry for each record in the data file; hence, it is a dense
index
In sparse indexing, as the size of the table grows, the size of the mapping also
grows. These mappings are usually kept in primary memory so that address
fetches are fast; secondary memory is then searched for the actual data based on
the address obtained from the mapping. If the mapping itself grows too large,
fetching an address becomes slow, and a single-level sparse index is no longer
efficient. To overcome this problem, a second level of indexing is introduced.
In this scheme, to reduce the size of the mapping, another level of indexing is
added. A huge range for the columns is selected initially so that the first-level
mapping stays small; each range is then divided into smaller ranges. The first-level
mapping is stored in primary memory, so that address fetches are fast, while the
second-level mapping and the actual data are stored in secondary memory (the
hard disk).
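At the SQL level, secondary indexes are created and dropped with statements like these (names are illustrative; DROP INDEX syntax varies by vendor):

    CREATE INDEX idx_emp_name ON employee(name);           -- secondary index on a non-key field
    CREATE UNIQUE INDEX idx_emp_email ON employee(email);  -- index on a candidate key
    DROP INDEX idx_emp_name;                               -- remove the index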
25. With examples, explain DDL, DML and TCL commands.
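As a minimal sketch of each command class (standard SQL forms; table and column names are illustrative):

    -- DDL (Data Definition Language): CREATE, ALTER, DROP -- defines structure.
    CREATE TABLE course (course_id INT PRIMARY KEY, title VARCHAR(50));
    ALTER TABLE course ADD credits INT;

    -- DML (Data Manipulation Language): INSERT, UPDATE, DELETE, SELECT -- works on rows.
    INSERT INTO course VALUES (1, 'DBMS', 4);
    UPDATE course SET credits = 3 WHERE course_id = 1;

    -- TCL (Transaction Control Language): COMMIT, ROLLBACK, SAVEPOINT.
    SAVEPOINT sp1;                            -- named point within the transaction
    DELETE FROM course WHERE course_id = 1;
    ROLLBACK TO sp1;                          -- undo the delete only
    COMMIT;                                   -- make the remaining changes permanent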
(Maximum 30 Marks)