Data Structures and Algorithms 2075
Long Questions:
Attempt any TWO questions:
1. How can you use a stack to convert an infix expression to postfix? Convert the infix expression (A + B)*(C - D) to postfix using a stack.
Algorithm to convert an infix expression to postfix using a stack:
1. Initialize an empty stack and an empty postfix string.
2. Scan the infix string from left to right, one character at a time.
3. If the scanned character is an operand, add it to the postfix string.
4. If the scanned character is an operator, POP operators from the stack and add them to the postfix string as long as the operator on top of the stack has equal or higher precedence than the scanned operator (a '(' on top of the stack stops this popping); then PUSH the scanned operator onto the stack.
5. If the scanned character is a left parenthesis '(', PUSH it onto the stack.
6. If the scanned character is a right parenthesis ')', POP from the stack and add to the postfix string until a '(' is encountered. Discard both '(' and ')'.
7. Repeat steps 3-6 until all the characters are scanned.
8. After all characters are scanned, POP the remaining operators from the stack and add them to the postfix string until the stack is empty.
Given,
Infix expression = (A + B)*(C - D)
Converting to postfix:
Character | Stack | Postfix expression
( | ( |
A | ( | A
+ | (+ | A
B | (+ | AB
) | | AB+
* | * | AB+
( | *( | AB+
C | *( | AB+C
- | *(- | AB+C
D | *(- | AB+CD
) | * | AB+CD-
(end of input) | | AB+CD-*
The postfix expression of the given infix expression is AB+CD-*.
2. Explain the concept of divide and conquer algorithm. Hand test the quick sort algorithm with the array of numbers (78, 34, 21, 43, 7, 18, 9, 56, 38, 19). What is the time complexity of the quick sort algorithm?
Divide-and-conquer algorithm breaks a problem into subproblems that are similar to the original problem, recursively solves the subproblems, and finally combines the solutions to the subproblems to solve the original problem.
Divide and Conquer algorithm consists of the following three steps.
- Divide: Divide the original problem into a set of subproblems.
- Conquer: Solve every subproblem individually, recursively.
- Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Given array:
78, 34, 21, 43, 7, 18, 9, 56, 38, 19
Sorting using quick sort:
......
Algorithm for Quick sort
QuickSort(A, l, r)
{
if(l<r)
{
p = Partition(A,l,r);
QuickSort(A,l,p-1);
QuickSort(A,p+1,r);
}
}
Partition(A, l, r)
{
x = l; y = r + 1; p = A[l];   // first element as pivot
while(x < y)
{
do {
x++;
} while(x <= r && A[x] <= p);   // scan right for an element > pivot
do {
y--;
} while(A[y] > p);              // scan left for an element <= pivot
if(x < y)
swap(A[x], A[y]);
}
A[l] = A[y];
A[y] = p;
return y; //return final position of pivot
}
Time Complexity:
Best Case: T(n) = O(n log n)
Worst Case: T(n) = O(n²)
Average Case: T(n) = O(n log n)
3. Discuss depth first and breadth first traversal of a graph with suitable example.
Traversing a graph means visiting all the vertices of the graph exactly once. In a graph all nodes are treated equally, so the starting node can be chosen arbitrarily.
A graph traversal finds the edges to be used in the search process without creating loops; that is, using graph traversal we can visit all the vertices of the graph without getting into a looping path.
Breadth First Search Traversal (BFS)
The traversal starts at a node v and marks it. It then visits all edges incident to v, marking each adjacent node, before moving on to an adjacent node and repeating the process. The traversal continues until every node in the graph has been visited. A queue is maintained in this technique to keep track of the marked nodes whose neighbours are still to be explored.
Algorithm:
BFS(G, s)   /* s is the start vertex */
{
T = {s};
L = Φ;   /* an empty queue */
enqueue(L, s);
while (L != Φ)
{
v = dequeue(L);
for each neighbor w of v
if (w ∉ L and w ∉ T)
{
enqueue(L, w);
T = T ∪ {w};   /* put edge {v, w} also */
}
}
}
Example:
Choose a as initial vertex then we have
Order the vertices of level 1 i.e. {b, c, g, e, f}. Say order be {e, f, g, b, c}.
Depth First Traversal (DFS)
This technique picks a node and marks it. An unmarked node adjacent to the previous node is then selected and marked; it becomes the new start node, possibly leaving the previous node with unexplored edges for the present. The traversal continues recursively until all unmarked nodes of the current path are visited. If this cannot be done, we move back to another vertex and repeat the process; this method of search is also called backtracking. The whole process continues until all the vertices are visited. A stack (or recursion) is maintained in this technique to keep track of the marked nodes whose edges are still unexplored.
Algorithm:
DFS(G,s)
{
T = {s};
Traverse(s);
}
Traverse(v)
{
for each w adjacent to v and not yet in T
{
T = T ∪ {w}; //put edge {v, w} also
Traverse (w);
}
}
Example:
Choose a as initial vertex then we have
Short Questions:
Attempt any Eight questions:
4. What do you mean by complexity of algorithms? How do you find time complexity?
The complexity of an algorithm is a function f (n) which measures the time and space used by an algorithm in terms of input size n. The complexity of an algorithm is a way to classify how efficient an algorithm is, compared to alternative ones.
Finding the complexity of a given algorithm by counting the number of steps and the number of memory references, using the RAM model, is called detailed analysis. For a detailed time-complexity analysis we count the number of steps taken by each statement, add them all, and express the total in big-O notation. Similarly, for a detailed space-complexity analysis we count the number of memory references and express the total in big-O notation.
E.g.
#include<stdio.h>
int main()
{
int i, n, fact=1;
printf("Enter a number to calculate factorial:");
scanf("%d",&n);
for(i=1; i<=n; i++)
fact = fact*i;
printf("Factorial of %d=%d", n, fact);
return 0;
}
Time Complexity:
The declaration statement takes 1 step.
Printf statement takes 1 step.
Scanf statement takes 1 step.
In for loop:
i=1 takes 1 step
i<=n takes (n+1) steps
i++ takes n steps
within the for loop, fact = fact*i takes n steps
printf statement takes 1 step
return statement takes 1 step
Total time complexity = 1+1+1+1+n+1+n+n+1+1
= 3n+7
= O(n)
Space complexity:
total memory references used = 3 = O(1)
5. Compare stack with queue. How is linear queue different from circular queue?
Comparison between stack and queue:
1. A stack is a LIFO (last-in, first-out) structure: the element inserted last is deleted first. A queue is a FIFO (first-in, first-out) structure: the element inserted first is deleted first.
2. In a stack, insertion (push) and deletion (pop) take place at the same end, called the top. In a queue, insertion (enqueue) takes place at the rear and deletion (dequeue) at the front.
3. A stack is maintained with a single pointer (top), while a queue needs two pointers (front and rear).
Difference between linear queue and circular queue
1. A linear queue is an ordered list in which data elements are organized sequentially. In contrast, a circular queue stores the data in a circular fashion, with the last position logically connected back to the first.
2. Both serve elements in FIFO order, but in a linear queue the front and rear indices only move forward, whereas in a circular queue they wrap around to the beginning of the array.
3. The insertion and deletion points are fixed in a linear queue, i.e. addition at the rear end and deletion at the front end, and slots freed at the front cannot be refilled. A circular queue can reuse any vacated slot as long as it is unoccupied.
4. A linear queue therefore wastes memory space, while a circular queue makes efficient use of space.
6. What is ADT? Discuss stack as an ADT.
Abstract data types are a set of data values and associated operations that are specified independently of any particular implementation. Generally the associated operations are abstract methods. E.g. stack, queue etc.
Stack as an ADT
A stack of elements of type T is a finite sequence of elements of T together with the operations
- CreateEmptyStack(S): Create or make stack S be an empty stack
- Push(S, x): Insert x at one end of the stack, called its top
- Top(S): If stack S is not empty; then retrieve the element at its top
- Pop(S): If stack S is not empty; then delete the element at its top
- IsFull(S): Determine if S is full or not. Return true if S is full stack; return false otherwise
- IsEmpty(S): Determine if S is empty or not. Return true if S is an empty stack; return false otherwise.
A stack specified by these values and operations, independently of how they are implemented (e.g. with an array or a linked list), is therefore an ADT.
7. Define recursive algorithm? How do you implement recursive algorithms while writing computer programs?
A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and which obtains the result for the current input by applying simple operations to the returned value for the smaller (or simpler) input.
We can use following steps to implement recursive algorithm:
- Initialize the algorithm. Recursive programs often need a seed value to start with. This is accomplished either by using a parameter passed to the function or by providing a gateway function that is nonrecursive but that sets up the seed values for the recursive calculation.
- Check to see whether the current value(s) being processed match the base case. If so, process and return the value.
- Redefine the answer in terms of a smaller or simpler sub-problem or sub-problems.
- Run the algorithm on the sub-problem.
- Combine the results in the formulation of the answer.
- Return the results.
For e.g.
Program to find factorial of an integer:
#include<stdio.h>

int factorial(int);

int main()
{
int n, f;
printf("Enter the number: ");
scanf("%d", &n);
f = factorial(n);
printf("Factorial of the number %d is %d\n", n, f);
return 0;
}

int factorial(int n)
{
if(n <= 1)          /* base case */
return 1;
else
return n * factorial(n - 1);   /* recursive case */
}
8. What are the benefits of using linked list over array? How can you insert a node in a singly linked list?
Linked lists and arrays are similar since they both store collections of data. Array is the most common data structure used to store collections of elements. Arrays are convenient to declare and provide the easy syntax to access any element by its index number. Once the array is set up, access to any element is convenient and fast.
The benefits of using linked list over array are:
1. Linked lists are dynamic data structures. i.e., they can grow or shrink during the execution of a program.
2. Linked lists have efficient memory utilization. Here, memory is not preallocated. Memory is allocated whenever it is required and it is de-allocated (removed) when it is no longer needed.
3. Insertion and Deletions are easier and efficient. Linked lists provide flexibility in inserting a data item at a specified position and deletion of the data item from the given position.
4. Many complex applications can be easily carried out with linked lists.
5. There is no need to define an initial size for a linked list.
6. Linked lists serve as building blocks for other data structures such as stacks, queues, and trees.
Steps involved in inserting a node at the beginning of singly linked list:
let *head be the pointer to first node in the current list
1. Create a new node using malloc function
NewNode=(NodeType*)malloc(sizeof(NodeType));
2. Assign data to the info field of new node
NewNode->info=newItem;
3. Set next of new node to head
NewNode->next=head;
4. Set the head pointer to the new node
head=NewNode;
5. End
Steps involved in inserting a node at the end of the singly linked list:
let *head be the pointer to first node in the current list
1. Create a new node using malloc function
NewNode=(NodeType*)malloc(sizeof(NodeType));
2. Assign data to the info field of new node
NewNode->info=newItem;
3. Set next of new node to NULL
NewNode->next=NULL;
4. If (head == NULL) then
set head = NewNode and exit.
5. Set temp = head;
6. while(temp->next != NULL)
temp = temp->next; //move temp to the last node
7. Set temp->next = NewNode;
8. End
9. How do you implement binary search algorithm? What is time complexity of this algorithm?
Binary search is an extremely efficient search technique that searches the given item in minimum possible comparisons in the already sorted list of given elements. The logic behind the technique is:
1. First find the middle element of the list
2. Compare the mid-element with searched item.
There are three cases:
a. If it is a desired element then search is successful.
b. If it is less than desired item then search only in the first half of the list.
c. If it is greater than desired item then search in the second half of the list.
Recursive Algorithm:
BinarySearch(A, l, r, key)
{
if(l > r)
return 0;   // search range empty: key not found
m = (l + r) / 2;
if(key = = A[m])
return m+1;   // position; index starts from 0
else if (key < A[m])
return BinarySearch(A, l, m-1, key);
else
return BinarySearch(A, m+1, r, key);
}
Example:
Consider an array of data:
21, 36, 56, 79, 101, 123, 142, 203
Now,
Tracing binary search for above data;
Set L=0, R=7, key=123 (search for 123)
S.No. | L | R | Mid=(L+R)/2 | Remarks
1 | 0 | 7 | (0+7)/2=3 | Key > a[3]
2 | 4 | 7 | (4+7)/2=5 | Key == a[5]
Therefore, search successful and key(123) is at position (Mid+1)=5+1=6.
Efficiency:
From the above algorithm we can say that the running time of the algorithm is
T(n) = T(n/2) + O(1)
= O(logn)
In the best case the key lies exactly at the middle, so the output is obtained in one comparison, i.e. O(1) time.
In the worst case the key is found only at the last probe (or is not present at all), so the running time is O(log n). The average-case running time is also O(log n).
10. What is hashing? Discuss rehashing with example.
Hashing is a well-known technique to search any particular element among several elements. It minimizes the number of comparisons while performing the search.
In hashing, an array data structure called a hash table is used to store the data items. A hash table is a data structure used for storing and retrieving data very quickly. Insertion of data into the hash table is based on the key value; hence every entry in the hash table is associated with some key. Using the hash key, the required piece of data can be searched in the hash table with only a few key comparisons.
Fig: Hashing Mechanism
Hash function is a function which is used to put the data in the hash table. Hence one can use the same hash function to retrieve the data from the hash table. Thus hash function is used to implement the hash table. The integer returned by the hash function is called hash key.
E.g. Division Hash Function
The division hash function depends upon the remainder of a division. Typically the divisor is the table length. For example, if the records 54, 72, 89, and 37 are placed in a hash table of size 10, then the hash function is:
h(key) = record % table size
and key obtained by hash function is called hash key.
54%10=4
72%10=2
89%10=9
37%10=7
Rehashing
Rehashing is a collision resolution technique.
Rehashing is a technique in which the table is resized, i.e., the size of the table is (roughly) doubled by creating a new table. It is preferable that the new table size be a prime number. There are situations in which rehashing is required:
• When table is completely full
• With quadratic probing when the table is filled half.
• When insertions fail due to overflow.
In such situations, we have to transfer entries from old table to the new table by re computing their positions using hash functions.
Suppose we have to insert the elements 37, 90, 55, 22, 17, 49, and 87. The table size is 10 and we will use the hash function:
H(key) = key mod tablesize
37 % 10 = 7
90 % 10= 0
55 % 10 = 5
22 % 10 = 2
17 % 10 = 7
49 % 10 = 9
Now this table is almost full and if we try to insert more elements collisions will occur and eventually further insertions will fail. Hence we will rehash by doubling the table size. The old table size is 10 then we should double this size for new table, that becomes 20. But 20 is not a prime number, we will prefer to make the table size as 23. And new hash function will be
H(key) = key mod 23
37 % 23 = 14
90 % 23 = 21
55 % 23 = 9
22 % 23 = 22
17 % 23 = 17
49 % 23 = 3
87 % 23 = 18
Now the hash table is sufficiently large to accommodate new insertions.
11. How do you traverse a binary tree? Discuss.
Tree traversal is a way of visiting each node in the tree exactly once in a systematic manner.
The binary tree can be traversed in 3 ways:
1. Pre-order traversal
2. In-order traversal
3. Post-order traversal
Pre-order Traversal
In Pre-Order traversal, the root node is visited before left child and right child nodes. In this traversal, the root node is visited first, then its left child and later its right child. This pre-order traversal is applicable for every root node of all subtrees in the tree.
Algorithm:
Until all nodes are traversed:
Step 1: Visit root node.
Step 2: Recursively traverse left sub-tree.
Step 3: Recursively traverse right sub-tree.
E.g.
The preorder traversal output of the given above tree is: A B D H I E C F G
In-order Traversal:
In In-Order traversal, the root node is visited between left child and right child. In this traversal, the left child node is visited first, then the root node is visited and later we go for visiting right child node. This in-order traversal is applicable for every root node of all subtrees in the tree. This is performed recursively for all nodes in the tree
Algorithm:
Until all nodes are traversed:
Step 1: Recursively traverse left sub-tree.
Step 2: Visit root node.
Step 3: Recursively traverse right sub-tree.
E.g.
The inorder traversal output of the given tree is: H D I B E A F C G
Post-order Traversal
In Post-Order traversal, the root node is visited after left child and right child. In this traversal, left child node is visited first, then its right child and then its root node. This is recursively performed until the right most node is visited.
Algorithm:
Until all nodes are traversed:
Step 1: Recursively traverse left sub-tree.
Step 2: Recursively traverse right sub-tree.
Step 3: Visit root node.
E.g.
The post-order traversal output of the given above tree is: H I D E B F G C A
12. Write short notes on:
(a) Dynamic memory allocation
(b) Game tree