Algorithm Interview Questions and Answers for 2023

Information technology, software, and other technical jobs involve a series of technical interviews throughout the hiring process, and most companies pay close attention to the algorithm round. If you want to be a strong candidate, you must have a solid hold on concepts related to algorithms. This article contains the most commonly asked algorithm interview questions and answers, including data structures and algorithms interview questions, and it ends with some Java and programming algorithm questions as well. The beginner, intermediate, and advanced sections contain Python and general programming algorithm questions for various levels. The questions and answers provided will make you confident in the concepts of trees, linked lists, data structures, sorting and searching algorithms, and will make you job-ready for most of the positions listed, along with the companies who hire for these or similar positions as listed on different job posting platforms.


Beginner

An algorithm is a set of rules to obtain the expected output from the given input. It defines a set of rules or steps specifying the calculations, data processing, and automated reasoning, and it can be reused across one or more problem statements. We can also say that an algorithm is a method for computing a function that can be expressed in a finite amount of space and time.

We need algorithms in programming during analysis or development as algorithms boost the effectiveness of some existing methods. Comparing an algorithm's performance to other approaches using time and space complexity concepts is easier. Using algorithms in our work or programs provides us with a detailed description of the criteria and goals of the problems, so we are moving forward in our project systematically. Algorithms bring well-defined outputs, finiteness to our program and an unambiguous approach toward the problem statement. Along with all these, the cost of design is also optimized and reduced many times if proper algorithms are used.

To compare two algorithms solving the same problem statement, we check the complexity of both algorithms, and the one with lesser complexity is the better among the two. Complexity is the technique used to categorize the efficiency of algorithms. We have two types of complexity; one is time complexity, and another one is space complexity.

The complexity of time checks the running time of a program as a function of the size of the input—the lesser time, the better the algorithm.

The complexity of space checks the algorithms on the basis of the space they are consuming on the computer storage to perform the task. The lesser the space used, the better the algorithm.

The best case and the worst case scenarios are part of the asymptotic analysis of an algorithm's performance. The best-case scenario is the data arrangement or data situation in which the algorithm performs best, and the worst-case scenario, on the other hand, is the arrangement of the data in which the algorithm performs worst. For example, the best case for binary search is O(1) (the target is found at the middle element on the first comparison), and the worst case is O(log n).

This is a frequently asked question in Algorithm interview questions.  

Big O notation is an asymptotic notation to denote or define an upper bound for an algorithm by bounding the function from above. It represents the time and space complexity and is useful when we just have an upper constraint on an algorithm's time complexity.

Similar to the Big O notation, we have Big theta notation. It is also an asymptotic notation defining the exact asymptotic behavior. The theta notation binds the functions from above and below to define behavior. Dropping the low-order terms and ignoring leading constants is a convenient approach to getting the theta notation for a problem.

Sure, 

Let the two numbers be a and b, 

We can use the bitwise XOR operator (^), as in Java or Python, to swap these, and this can be done in three steps; 

  • a = a ^ b; 
  • b = a ^ b;   // this will act like b = (a ^ b) ^ b, resulting in b equals a 
  • a = a ^ b    // this will act like a = (a ^ b) ^ a, resulting in a equals b 

This way, the values of a and b are swapped using XOR without using the third variable. 
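
For illustration, here is the same three-step XOR swap as a tiny runnable Python sketch (the sample values are just an assumption for the example): 

a, b = 10, 25 
a = a ^ b 
b = a ^ b   # b now holds the original value of a 
a = a ^ b   # a now holds the original value of b 
print(a, b)  # prints: 25 10 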

Divide and conquer is an algorithmic paradigm that handles a large problem by splitting it into smaller chunks, determining the solution for each smaller chunk, and combining the partial results. The paradigm has three parts - Divide, Conquer and Combine - which can be described as the three steps in which it processes the input data (a small sketch follows the list below), 

  • Divide: The original problem set is separated into subproblems in this step. 
  • Conquer: The subproblems are solved individually at this step. 
  • Combine: This is the last step, where the algorithm combines the solutions to the subproblems to get the final solution to the original problem. 
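
To make the three steps concrete, here is a small illustrative Python sketch (my own example, not part of the original answer) that finds the maximum of a list with divide and conquer: 

def find_max(arr, lo, hi): 
    # Conquer: a single element is its own maximum 
    if lo == hi: 
        return arr[lo] 
    # Divide: split the range into two halves 
    mid = (lo + hi) // 2 
    left_max = find_max(arr, lo, mid) 
    right_max = find_max(arr, mid + 1, hi) 
    # Combine: the answer is the larger of the two halves' answers 
    return max(left_max, right_max) 

values = [7, 2, 9, 4, 11, 5] 
print(find_max(values, 0, len(values) - 1))  # prints: 11 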

A greedy algorithm is a method that chooses the best possible decision at each step, hoping that this eventually results in a globally optimal solution. The greedy algorithm, in simple words, picks the best option available at the moment, regardless of future consequences. The main goal of this approach is to find an optimal solution for the complete problem and not just for the subproblems. Greedy choices are used in a bunch of algorithms, such as the Fractional Knapsack problem or Dijkstra's algorithm.
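
As a small illustration of the greedy idea, here is a sketch of the fractional knapsack in Python; the item list and capacity are assumptions for the example. Items are taken in decreasing order of value per weight, with a fraction of the last item if needed. 

def fractional_knapsack(items, capacity): 
    # items is a list of (value, weight) pairs 
    # Greedy choice: best value-to-weight ratio first 
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True) 
    total = 0.0 
    for value, weight in items: 
        if capacity == 0: 
            break 
        take = min(weight, capacity)       # take as much as fits 
        total += value * (take / weight)   # possibly a fraction 
        capacity -= take 
    return total 

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0 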

Sure, searching algorithms are used for searching for an element or an item from a data structure or a list of elements/items. There are 2 categories based on the type of search method, i.e., Sequential Search and Interval Search. Sequential search performs the search consecutively along the data. In contrast, Interval search performs the search by splitting the data set into two equal parts each time.

The linear search algorithm is used to search for an element in a list of elements. In linear search, the algorithm traverses the List of elements from the beginning to the end and inspects the properties of all the elements noticed along the way. The algorithm follows the following steps: 

  1. It traverses the given List of elements using a loop. 
  2. It compares the target value to the current value in the List in iteration. 
  3. If both values match, it prints the current index of the array; otherwise, it moves on to the next element. 
  4. Repeat steps 1 to 3 till the end of the List is reached. 
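
A minimal Python sketch of these steps (assuming we want to print every index where the target occurs) could look like this: 

def linear_search(items, target): 
    found = False 
    for index in range(len(items)):         # step 1: traverse with a loop 
        if items[index] == target:          # step 2: compare with the target 
            print("Found at index", index)  # step 3: report the index 
            found = True 
    if not found:                           # step 4 ends when the list ends 
        print("Not found") 

linear_search([4, 8, 15, 8, 23], 8)  # Found at index 1, Found at index 3 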

Expect to come across this popular question in DSA interview questions.  

Depth-first search is a search algorithm mainly used in trees and graphs. As the name suggests, this algorithm first visits nodes in depth before searching along the same level. To explain the processing of the algorithm, I would say, 

The algorithm starts at the root node and explores each branch as far as possible before backtracking to check the other branches. When a dead end occurs in any iteration, the algorithm traverses backward. It uses a Stack to remember the vertex from which to resume the search. For a connected graph, the edges followed by DFS form a spanning tree. 

We use Stack to implement the  DFS algorithm. Following are the steps to implement it. 

  1. Create a stack whose capacity is the total number of vertices in the graph. 
  2. Choose any vertex as the starting point of the search. Visit that vertex and push it onto the Stack. 
  3. Push any non-visited adjacent vertex of the vertex on top of the Stack onto the Stack. 
  4. Repeat step 3 until there are no more non-visited adjacent vertices of the vertex at the top of the Stack. 
  5. If there are no new vertices to visit, backtrack by popping the next vertex from the Stack. 
  6. Continue with steps 3, 4, and 5 until the Stack is empty. 
  7. When the Stack is entirely empty, create the final spanning Tree by deleting the unused edges of the graph. 
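
Here is a minimal Python sketch of an iterative DFS using an explicit stack; the graph is assumed to be an adjacency-list dictionary, and building the spanning tree itself is omitted for brevity: 

def dfs(graph, start): 
    visited = set() 
    stack = [start]            # steps 1-2: start the stack with a chosen vertex 
    order = [] 
    while stack:               # steps 3-6: keep going until the stack is empty 
        vertex = stack.pop() 
        if vertex in visited: 
            continue 
        visited.add(vertex) 
        order.append(vertex) 
        for neighbour in graph[vertex]: 
            if neighbour not in visited:   # push non-visited neighbours 
                stack.append(neighbour) 
    return order 

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []} 
print(dfs(graph, 'A'))   # e.g. ['A', 'C', 'E', 'B', 'D'] 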

Okay, so we need to add a node in the middle of a sorted linked list. I am assuming that the linked List is sorted in ascending order. Now, adding a node in the middle of this linked List sorted in ascending order can be done in the following steps; 

  1. Check that the List is not empty. If the List is empty, set the new node as the head and return it. 
  2. Check if the value of the node to be inserted is smaller than the value of the head node. If it is, add the node at the beginning of the linked List and make it the new head. 
  3. Otherwise, find the node after which the input node should be added: check each node one by one until you reach the first node with a value greater than the input node (or the end of the List). 
  4. Insert the input node just before the node found in step 3, i.e., after its predecessor. This is done by pointing the predecessor's next pointer to the new node and pointing the new node's next pointer to the node found in step 3. 
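
A minimal Python sketch of this insertion, with an assumed simple Node class, could be: 

class Node: 
    def __init__(self, data): 
        self.data = data 
        self.next = None 

def sorted_insert(head, new_node): 
    # Empty list, or the new value belongs before the head 
    if head is None or new_node.data < head.data: 
        new_node.next = head 
        return new_node          # new_node becomes the new head 
    # Walk until the next node is greater than the new value 
    current = head 
    while current.next is not None and current.next.data <= new_node.data: 
        current = current.next 
    new_node.next = current.next # link the new node in after 'current' 
    current.next = new_node 
    return head 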

Dynamic programming is basically a recursion optimization. We can use it to optimize any recursive solution which involves repeated calls for the same input.

In dynamic programming, the goal is to save the results of the subproblems, so we need not recalculate them. Using dynamic programming to solve a problem also reduces the time complexity making the algorithm more efficient.
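
A classic small illustration (my own sketch, not from the original answer) is memoizing the Fibonacci recursion so that each subproblem is computed only once: 

from functools import lru_cache 

@lru_cache(maxsize=None) 
def fib(n): 
    # Without memoization this recursion is exponential; with the 
    # cached results it runs in linear time. 
    if n < 2: 
        return n 
    return fib(n - 1) + fib(n - 2) 

print(fib(40))  # 102334155 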

Swapping two numbers without using the third number is a small 3-step algorithm. It can be done if we store the sum of numbers in one number and then subtract the sum from another. For example, 

If a and b are two numbers as 

a = 2 and b = 3 

Then we can swap these two numbers getting the output as 

a = 3 and b = 2 

By following the steps, 

We have a = 2 and b = 3, 

Then the first step is to add the two numbers and store the value in variable a, 

a = a + b ; this would give us a = 5 

The second step is to subtract a and b and store the value in variable b, 

b = a - b; which will be 5 - 3 = 2, so b now holds 2 (the original value of a) 

The third or final step is to now subtract a and b and store the value in a, 

a = a - b; giving us 5 - 2 = 3, so a now holds 3 (the original value of b) 

In the end, we have a = 3 and b = 2; the two values are swapped as required. 

The algorithm to reverse a string gives the output by reversing the input string. For example, if the input string is "details", then the output should be "sliated". The pseudocode of the algorithm can be given as this: 

  1. Initialize 2 variables, i and j 
  2. Set j = length(string) - 1 so that j points to the last character of the string 
  3. Set i = 0 so that i points to the first character of the string 
  4. Interchange string[i] with string[j] 
  5. Increment i by 1 
  6. Decrement j by 1 
  7. If i < j, go to step 4 again 
  8. Stop
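
In Python, where strings are immutable, the same two-pointer idea can be sketched on a list of characters: 

def reverse_string(text): 
    chars = list(text) 
    i, j = 0, len(chars) - 1       # two pointers at both ends 
    while i < j: 
        chars[i], chars[j] = chars[j], chars[i]  # swap the characters 
        i += 1 
        j -= 1 
    return "".join(chars) 

print(reverse_string("details"))   # prints: sliated 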

A hash table, as per my knowledge, is a data structure for storing values associated with keys of arbitrary type. The hash table uses a hash function to compute an index into an array of buckets, and these indexes are used to store the elements. Each possible element is assigned to a bucket by the hash function. Multiple keys can map to a single bucket, so the key-value pairs are stored as lists within their buckets. For example, a hash table can map strings to lists: for each word, we add it to the list stored at its key, creating a new list if needed. This is all I know about hash tables as of now.

A must-know for anyone heading into a DS interview, this question is frequently asked in Data structure questions.  

Okay, so we can use the hash table to find all anagrams in a dictionary. For this, we have to group all the words containing the same set of letters. If we map the words to strings representing their sorted letters, we can group words into lists by using the sorted letters as the key. 

The pseudocode for this would be: 

FUNCTION find_anagrams(words) 
    word_group = HashTable<String, List> 
    FOR word IN words 
        word_group.get_or_default(sort(word), []).push(word) 
    END FOR 
    anagrams = List 
    FOR key, value IN word_group 
        anagrams.push(value) 
    END FOR 
    RETURN anagrams 
END FUNCTION 
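
The same idea translated into runnable Python (a sketch that uses a plain dictionary as the hash table) could be: 

from collections import defaultdict 

def find_anagrams(words): 
    word_group = defaultdict(list) 
    for word in words: 
        # Words that are anagrams share the same sorted letters 
        word_group["".join(sorted(word))].append(word) 
    return list(word_group.values()) 

print(find_anagrams(["listen", "silent", "enlist", "google", "banana"])) 
# [['listen', 'silent', 'enlist'], ['google'], ['banana']] 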

It's no surprise that this one pops up often in Algorithm interview questions.  

Inserting a node in the binary search tree is a small three-step process. The first step is to compare the node we are inserting with the root node to see whether its value is smaller or greater than the root's. 

Further steps of the algorithm would be: 

  • Make the root node the current node. 
  • If the node we are inserting is less than the current node, 
    • If it has a left child, traverse left 
    • If there is no left child, insert the node here 
  • If the node we are inserting is greater than the current node, 
    • If it has a right child, traverse right 
    • If there is no right child, insert the node here 

  Okay, so counting the leaf nodes in a binary tree is also a three-step process, 

  1. If the node is null, return 0. 
  2. If the node is a leaf node, i.e., both its left and right children are null, return 1. 
  3. Recursively calculate the number of leaf nodes using the formula 

"No. of lead nodes = no. of leaf nodes in left subtree + no. of leaf nodes in right subtree" 

Intermediate

Encryption means transforming readable text into a secret coded format known as "ciphertext". This technique uses a string of bits known as a "key" to convert the text into a secret code. The larger the key, the greater the number of possible patterns for producing the ciphertext. Most algorithms use fixed blocks of input with lengths ranging from 64 to 128 bits, while others use the stream technique. The three popular categories of cryptographic algorithms are - 

  1. Hash function; also used in blockchain 
  2. Symmetric key algorithms 
  3. Asymmetric key algorithms

A common question in Data structure interview questions for freshers, don't miss this one.  

The merge sort algorithm is a comparison-based sorting algorithm. In the algorithm, the order of equal elements in the input and output is the same. The merge sort works in the following way; 

  1. It first separates the unsorted List into n sublists, each one with one element. 
  2. Then, it merges the sublists repeatedly to create new sorted sublists until only one sublist remains. This is the final sorted List. 

The time complexity of the mergesort is O(nlog(n)), where n is the size of the input list, and the space complexity is linear, O(n). 
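
For illustration, here is a compact recursive (top-down) merge sort sketch in Python; it is my own example rather than the article's code: 

def merge_sort(arr): 
    if len(arr) <= 1:                  # a single element is already sorted 
        return arr 
    mid = len(arr) // 2 
    left = merge_sort(arr[:mid])       # sort each half 
    right = merge_sort(arr[mid:]) 
    # merge the two sorted halves 
    merged, i, j = [], 0, 0 
    while i < len(left) and j < len(right): 
        if left[i] <= right[j]:        # <= keeps the sort stable 
            merged.append(left[i]) 
            i += 1 
        else: 
            merged.append(right[j]) 
            j += 1 
    merged.extend(left[i:]) 
    merged.extend(right[j:]) 
    return merged 

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82] 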

Quicksort is also a sorting algorithm. In practice, it is often quicker than the merge sort and heapsort algorithms. The quick sort algorithm is based on the divide and conquer paradigm and operates by picking a 'pivot' element from the input array and separating the other elements into two subarrays based on whether they are greater or less than the pivot. Because of this, it is also known as the partition-exchange sort. Due to its recursive nature, Quicksort must be written in such a way that it can be called for a range within a bigger array, even if the end purpose is to sort the complete array. 

The following are the steps in the quicksort sorting algorithm. 

  1. Check if there are fewer than 2 elements. If so, return, as there is nothing to sort. 
  2. Otherwise, choose a pivot value. The pivot is a value that occurs in the range of the array and can be chosen randomly. One common way to choose the pivot is to take the median of the first, the last, and the middle elements of the array. 
  3. Partition the input array by reordering its elements while determining a point of division, so that all elements with values less than the pivot appear before the division and all elements with values greater than the pivot appear after it (elements equal to the pivot can go on either side). 
  4. Apply Quicksort recursively to the sub-array up to the point of division and to the sub-array after it, optionally excluding the element equal to the pivot at the point of division from both sub-arrays. The average time complexity of the quicksort algorithm is O(n log n). A short sketch follows below. 
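
Here is the promised sketch: a short recursive quicksort in Python that, for simplicity, uses the last element as the pivot instead of the median-of-three mentioned above: 

def quicksort(arr, lo=0, hi=None): 
    if hi is None: 
        hi = len(arr) - 1 
    if lo >= hi:                     # fewer than 2 elements: nothing to sort 
        return 
    pivot = arr[hi]                  # simple pivot choice: the last element 
    i = lo 
    for j in range(lo, hi): 
        if arr[j] < pivot:           # move smaller elements to the left side 
            arr[i], arr[j] = arr[j], arr[i] 
            i += 1 
    arr[i], arr[hi] = arr[hi], arr[i]   # place the pivot at its final spot 
    quicksort(arr, lo, i - 1)        # sort the two partitions recursively 
    quicksort(arr, i + 1, hi) 

data = [10, 80, 30, 90, 40, 50, 70] 
quicksort(data) 
print(data)   # [10, 30, 40, 50, 70, 80, 90] 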

Bubble sort is again a sorting algorithm, also known as sinking sort. In bubble sort, the algorithm iterates through a list, comparing neighboring elements and swapping them if they are out of order. The List is then sent through the iterations multiple times until there are no swaps done and the input list is sorted. The bubble sort algorithm can be explained using an example. 

Let us assume an array (7 5 12 3 4 9). The process of the bubble sort algorithm to sort this array in ascending order can be explained as follows. 

First Pass: 

  • (7 5 12 3 4 9) → (5 7 12 3 4 9); compares the first two elements and swaps them since 7 > 5 
  • (5 7 12 3 4 9) → (5 7 12 3 4 9); compares the second and third elements; no swap needed 
  • (5 7 12 3 4 9) → (5 7 3 12 4 9); compares the third and fourth elements and swaps them 
  • (5 7 3 12 4 9) → (5 7 3 4 12 9); compares the fourth and fifth elements and swaps them 
  • (5 7 3 4 12 9) → (5 7 3 4 9 12); compares the last two elements and swaps them. 

Second Pass: 

  • (5 7 3 4 9 12) → (5 7 3 4 9 12); compares the first two elements; no swap needed 
  • (5 7 3 4 9 12) → (5 3 7 4 9 12); compares the second and third elements and swaps them 
  • (5 3 7 4 9 12) → (5 3 4 7 9 12); compares the third and fourth elements and swaps them 
  • (5 3 4 7 9 12) → (5 3 4 7 9 12); compares the fourth and fifth elements; no swap needed 
  • (5 3 4 7 9 12) → (5 3 4 7 9 12); compares the last two elements; no swap needed. 

Third Pass: 

  • (5 3 4 7 9 12) → (3 5 4 7 9 12); compares the first two elements and swaps them 
  • (3 5 4 7 9 12) → (3 4 5 7 9 12); compares the second and third elements and swaps them 
  • (3 4 5 7 9 12) → (3 4 5 7 9 12); compares the third and fourth elements; no swap needed 
  • (3 4 5 7 9 12) → (3 4 5 7 9 12); compares the fourth and fifth elements; no swap needed 
  • (3 4 5 7 9 12) → (3 4 5 7 9 12); compares the last two elements; no swap needed. 

At the end of the third pass, the input list is sorted in ascending order; a fourth pass with no swaps would confirm this and stop the algorithm. This is all I know about the bubble sort. 

To find the maximum subarray sum for a given array, we can use Kadane's algorithm. It searches the given array from left to right, computes the largest sum at position n in the nth step, and stores this sum in a variable. The algorithm further calculates the subarray sum and keeps storing the biggest sum in the variable. In the end, the value of the variable will be the desired maximum subarray sum. 

The pseudo-code for this can be 

  • Initialize the variables: 
    • bestSum = INT_MIN 
    • currentSum = 0 // for an empty subarray, it is initialized as 0 
  • Loop over each element A[i] of the input array A: 
    • currentSum = currentSum + A[i] 
    • if (bestSum < currentSum) 
      • bestSum = currentSum 
    • if (currentSum < 0) 
      • currentSum = 0 
  • Return bestSum 
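
The same pseudocode as a runnable Python sketch (the sample array is an assumption for the example): 

def max_subarray_sum(arr): 
    best_sum = float("-inf") 
    current_sum = 0 
    for value in arr: 
        current_sum += value 
        if current_sum > best_sum: 
            best_sum = current_sum 
        if current_sum < 0:       # a negative running sum never helps later 
            current_sum = 0 
    return best_sum 

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # prints: 6 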

Dijkstra's algorithm is used to find the shortest path from a given node in a graph to every other node in the graph. It works on graphs with non-negative edge weights by repeatedly picking the unvisited node with the smallest known distance and relaxing the distances of its neighbours. It is commonly used to model road networks.
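
A compact sketch of Dijkstra's algorithm in Python using a priority queue (heapq); the graph here is assumed to be an adjacency-list dictionary of (neighbour, weight) pairs: 

import heapq 

def dijkstra(graph, source): 
    dist = {node: float("inf") for node in graph} 
    dist[source] = 0 
    heap = [(0, source)]                 # (distance so far, node) 
    while heap: 
        d, node = heapq.heappop(heap) 
        if d > dist[node]:               # stale entry, skip it 
            continue 
        for neighbour, weight in graph[node]: 
            new_dist = d + weight        # relax the edge 
            if new_dist < dist[neighbour]: 
                dist[neighbour] = new_dist 
                heapq.heappush(heap, (new_dist, neighbour)) 
    return dist 

graph = { 
    'A': [('B', 1), ('C', 4)], 
    'B': [('C', 2), ('D', 6)], 
    'C': [('D', 3)], 
    'D': [], 
} 
print(dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6} 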

No, the binary search algorithm is not practical for linked lists. The reason for this is that a linked List does not allow random access: we cannot reach the middle element of the linked List in constant, O(1), time. Finding the middle requires an O(n) traversal, which defeats the O(log n) advantage of binary search, making it impractical for linked lists.

One of the most frequently posed sorting interview questions, be ready for it.  

A recursive algorithm approaches a difficult problem by breaking it down into smaller subproblems until the problem is small enough to be solved directly. A recursive algorithm involves a function that calls itself, which is why it is called a recursive algorithm. 

For an algorithm to be a recursive algorithm, there are three laws, 

  1. There must be a base case. 
  2. A recursive algorithm must have a function that calls itself. 
  3. The state of a recursive algorithm must change in a way that moves it toward the base case. 

If these three rules are fulfilled by an algorithm, then it can be called a recursive algorithm.
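
The three laws can be seen in a tiny factorial sketch: 

def factorial(n): 
    if n <= 1:                 # law 1: base case 
        return 1 
    # law 2: the function calls itself 
    # law 3: the argument shrinks, moving toward the base case 
    return n * factorial(n - 1) 

print(factorial(5))   # prints: 120 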

Insertion sort is a sorting technique that separates the input list into sorted and unsorted sub-lists. After separating the input list into two, as mentioned, it inserts one element at a time on the proper spot in the sorted sub-list from the unsorted one. After each iteration, one new element is added to the sorted sub-list, and one is removed from the unsorted one. In the end, the required sorted List is the sorted sub-list.

Selection sort is a sorting technique that separates the data into sorted and unsorted sub-lists. Then, the minimum element, or the smallest element from the unsorted List, is selected and placed in the sorted sub-list. The loop works until all the elements from the unsorted sub-list are shifted to the sorted sub-list giving us the final sorted List.
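
A short Python sketch of selection sort, as my own illustration of the description above: 

def selection_sort(arr): 
    n = len(arr) 
    for i in range(n): 
        # find the smallest element in the unsorted part arr[i:] 
        min_index = i 
        for j in range(i + 1, n): 
            if arr[j] < arr[min_index]: 
                min_index = j 
        # move it to the end of the sorted part 
        arr[i], arr[min_index] = arr[min_index], arr[i] 
    return arr 

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64] 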

The space complexity of both the insertion sort and the selection sort is O(1). This is because both sorting methods sort the list in place and need only a constant amount of extra space, such as a single temporary variable used while swapping or shifting elements.

The definition I am getting in my mind is, "The tree traversal is the process of visiting all the nodes of a tree". A tree can be traversed in three ways as 

  1. Inorder Traversal 
  2. Preorder Traversal 
  3. Postorder Traversal 

Inorder Traversal algorithm, steps can be given as: 

  1. Traverse the left subtree, i.e., call Inorder(left subtree) 
  2. Visit the root of the Tree. 
  3. Traverse the right subtree, i.e., call Inorder(right subtree)

Preorder Traversal algorithm steps can be given as follows: 

  1. Visit the root first. 
  2. Traverse the left subtree, which means call Preorder (left → subtree) 
  3. Traverse the right subtree, which means call Preorder (right → subtree)

Postorder Traversal algorithm steps can be given as follows: 

  1. Traverse the left subtree, which means call Postorder (left → subtree) 
  2. Traverse the right subtree, which means call Postorder (right → subtree) 
  3. In the end, visit the root 
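
The three traversals can be written as short recursive Python functions, assuming a simple Node class with left and right children: 

class Node: 
    def __init__(self, data, left=None, right=None): 
        self.data = data 
        self.left = left 
        self.right = right 

def inorder(node): 
    if node: 
        inorder(node.left) 
        print(node.data, end=" ") 
        inorder(node.right) 

def preorder(node): 
    if node: 
        print(node.data, end=" ") 
        preorder(node.left) 
        preorder(node.right) 

def postorder(node): 
    if node: 
        postorder(node.left) 
        postorder(node.right) 
        print(node.data, end=" ") 

root = Node(1, Node(2, Node(4), Node(5)), Node(3)) 
inorder(root)    # 4 2 5 1 3 
print() 
preorder(root)   # 1 2 4 5 3 
print() 
postorder(root)  # 4 5 2 3 1 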

A staple in DSA interview questions, be prepared to answer this one.  

Sure, some of the traversal orders and traversal-based algorithms which I can recall at this moment are: 

  • Pre-order traversal 
  • In-order traversal 
  • ZigZag traversal 
  • Breadth-first search 
  • Post-order traversal 

This question is a regular feature in Algorithm interview questions, be ready to tackle it.  

The algorithm to insert a node in a binary search tree can be explained in 3 steps. 

  • Start at the root node and make it the current node. 
  • If the current node's value is greater than the value we are inserting, then: 
    • If the current node has a left child, move to the left child and repeat. 
    • If there is no left child, add the new node here. 
  • If the current node's value is less than the value we are inserting, then: 
    • If the current node has a right child, move to the right child and repeat. 
    • If there is no right child, add the new node here. 

This is the Binary search tree, as per my knowledge.
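
For completeness, here is a minimal recursive sketch of this insertion in Python; it is an assumed illustration rather than the article's own code: 

class Node: 
    def __init__(self, data): 
        self.data = data 
        self.left = None 
        self.right = None 

def insert(root, data): 
    if root is None:            # empty spot found: insert here 
        return Node(data) 
    if data < root.data:        # smaller values go to the left subtree 
        root.left = insert(root.left, data) 
    else:                       # greater (or equal) values go to the right 
        root.right = insert(root.right, data) 
    return root 

root = None 
for value in [50, 30, 70, 20, 40]: 
    root = insert(root, value) 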

Okay, so the Heap sort is a comparison-based sorting algorithm similar to the selection sort. This algorithm separates its input into two regions: one as sorted and another one as unsorted as the first step. Then, it successively removes the items from the unsorted part by taking the largest element from it and putting it into the sorted region one by one. 

The main advantage of the heap sort algorithm is that it does not waste time scanning the unsorted part back to back multiple times as done in the selection sort. Instead, heap sort keeps the unsorted region in a heap data structure to identify the largest element in each step more rapidly. The steps of the heap sort algorithm can be explained as follows: 

  • At first, it starts by converting the List to a max heap. 
  • Then, the algorithm swaps the first and last values in the List, which reduces the range of values to be considered in the heap operation by one, and then, it filters the new first value into its heap place. 
  • This process is repeated until only one value remains in the unsorted (heap) region. 

And this is all about the heap sort algorithm. 
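
As an illustration (not part of the original answer), here is a compact heap sort sketch in Python that builds a max heap and repeatedly moves the largest remaining element into the sorted region: 

def heapify(arr, n, i): 
    # sift arr[i] down so the subtree rooted at i is a max heap 
    largest = i 
    left, right = 2 * i + 1, 2 * i + 2 
    if left < n and arr[left] > arr[largest]: 
        largest = left 
    if right < n and arr[right] > arr[largest]: 
        largest = right 
    if largest != i: 
        arr[i], arr[largest] = arr[largest], arr[i] 
        heapify(arr, n, largest) 

def heap_sort(arr): 
    n = len(arr) 
    for i in range(n // 2 - 1, -1, -1):      # build the max heap 
        heapify(arr, n, i) 
    for end in range(n - 1, 0, -1): 
        arr[0], arr[end] = arr[end], arr[0]  # move the max to the sorted region 
        heapify(arr, end, 0)                 # restore the heap on the rest 
    return arr 

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13] 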

A skip list is a layered, probabilistic data structure that allows us to search for, insert, or delete elements in a symbol table or dictionary efficiently.

In a skip list, elements are represented by nodes. The search function returns the content of the value related to the key. We can insert a new value in a key using the "insert" function and delete one using "delete". This is all I understand about the skip list.

To know if a linked list has a loop, I would use a two-pointer approach: maintain two pointers, advancing one pointer by two nodes at a time and the other by one node at a time. If the two pointers ever point to the same node, the linked List must have a loop; in a list without a loop, the faster pointer simply reaches the end.
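
The slow/fast two-pointer idea (Floyd's cycle detection) as a Python sketch, with an assumed Node class: 

class Node: 
    def __init__(self, data): 
        self.data = data 
        self.next = None 

def has_loop(head): 
    slow = fast = head 
    while fast is not None and fast.next is not None: 
        slow = slow.next           # moves one node at a time 
        fast = fast.next.next      # moves two nodes at a time 
        if slow is fast:           # they can only meet inside a loop 
            return True 
    return False 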

Okay. I think the best solution is to use a min heap: build a min heap of the first k elements. For each element after the kth element, compare it with the root of the min heap. If the element is greater than the root, replace the root with it and restore the heap; otherwise, ignore it. At the end of the List, the root of the min heap is the kth largest element.
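
Using Python's heapq module, the min-heap approach can be sketched as follows (the sample list and k are assumptions for the example): 

import heapq 

def kth_largest(nums, k): 
    heap = nums[:k] 
    heapq.heapify(heap)              # min heap of the first k elements 
    for value in nums[k:]: 
        if value > heap[0]:          # larger than the smallest of the top k 
            heapq.heapreplace(heap, value) 
    return heap[0]                   # the root is the kth largest overall 

print(kth_largest([3, 2, 1, 5, 6, 4], 2))  # prints: 5 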

Okay, so we have two integers, x and n. I will be writing the program in python. 

def power(x, n): 
    pow = 1 
    for i in range(n): 
        pow = pow * x 
    return pow 

# driver code 
if __name__ == '__main__': 
    x = 7 
    n = 2 
    # Function call 
    print(power(x, n)) 

We can find the missing number in such an array by marking which values from the array's range are present; the one value that is never marked is the missing number. Assuming the array holds N distinct numbers taken from a run of N+1 consecutive integers (so exactly one value in the run is missing), the code for this will be 

def missing_no(arr, N): 
    # mark each value relative to the smallest element of the array 
    start = min(arr) 
    # temp[i] is set to 1 when the value (start + i) is present in arr 
    temp = [0] * (N + 1) 
    for i in range(0, N): 
        temp[arr[i] - start] = 1 
    for i in range(0, N + 1): 
        if temp[i] == 0: 
            ans = i + start 
    print(ans) 

# driver code 
if __name__ == '__main__': 
    arr = [7, 8, 10, 11] 
    N = len(arr) 
    # function call 
    missing_no(arr, N)

Advanced

Okay, so a postfix expression is made up of operators and operands, with the operator coming after its operands. For example, 2 3 + is a valid postfix expression, equivalent to the infix expression 2 + 3.

This is a frequently asked question in DS algo interview questions.  

I would say that a queue is an abstract data type that specifies a linear, ordered list. A queue uses the First In, First Out (FIFO) rule to access its elements: insert operations can be performed only at one end, called the REAR, and delete operations can be performed only at the other end, called the FRONT.

A deque (double-ended queue) is a queue in which elements can be inserted or deleted at both ends, the Front and the Rear. Most of its other properties are similar to those of a queue, but because both ends support insertion and deletion, a deque is not limited to strict First In, First Out (FIFO) access the way a simple queue is.

Okay, first, heap and Stack are both used as a part of memory, mainly in Java. Some of the advantages of the heap over Stack are: 

  1. The heap is more flexible than the Stack because, in a heap, memory can be dynamically allocated and de-allocated as needed. 
  2. Another one is that heap memory is used to store objects in Java, whereas stack memory is used to store local variables and function calls. 
  3. Objects created in a heap are visible to all threads, while the variables stored in the Stack are only visible to the owner. 
  4. Deep recursion is less of a problem for the heap than for the Stack: heap memory is typically much larger, while recursion fills up the limited stack memory very quickly. 

PUSH and POP are acronyms for operations Pushing and Popping on a Stack, respectively. These operations are the ways data is stored and retrieved from the Stack.

PUSH is used to add an element to a stack, and POP is used to remove an element from the Stack.

PUSH takes two arguments, the name of the Stack to add the data item to and the value of the entry to be added. POP only requires the name of the Stack.

When the Stack is filled, the PUSH command, when issued, gives a stack overflow error, which means that the Stack cannot accommodate any more elements. In POP, a stack underflow error is thrown if the Stack is empty.

I think a single algorithm cannot be considered the best, as each algorithm is designed to solve a problem in a certain situation. However, in practice I have found QuickSort to be among the fastest, giving the best performance for most problem statements or inputs. 

There are certain advantages of Quicksort over other sorting algorithms. 

  1. Cache-efficient - QuickSort linearly scans and partitions the input, making it possible to get the most out of every cache load. 
  2. Can skip some swaps - QuickSort is sensitive to input that is already in the right order and can skip some swaps. 
  3. With a randomized (or well-chosen) pivot, QuickSort avoids its worst case on most practical inputs. 
  4. It adapts easily to inputs that are already or mostly sorted. 
  5. In QuickSort, speed takes priority over stability (it is not a stable sort). 

There are many applications of graph data structure. Some of which I am able to recall are: 

  1. Transport grids, where stations are represented as vertices and routes as the edges of the graph. 
  2. Utility graphs of power, where vertices are connection points and edges are the wires/pipes connecting them. 
  3. Social network graphs, where people are the vertices and their connections are the edges, used to determine the flow of information and to find hotspots. 
  4. Neural networks, where vertices represent neurons and edges represent the synapses between them. 

Okay, as per my knowledge, we have 

  1. Binary Tree - It is a tree where each parent node has at most two children, referred to as the left and right child. 
  2. AVL Tree - It is a self-balancing binary search tree. 
  3. Red-Black Tree - It is another type of self-balancing binary search tree; its colouring rules keep the Tree balanced. 
  4. The last one is the N-ary Tree, which is a tree in which each node has at most N children. A full N-ary tree is one in which every node has either 0 or exactly N children. 

Sure. 

  • Null is a value, but VOID is a data type identifier. 
  • Null indicates an empty value for a variable, while void is used for functions that return nothing or for pointers that have no associated type. 
  • Null can be assigned to a variable as its value, whereas void is never a value; it only describes a type. 

Expect to come across this popular question in Data structure interview questions.  

Dynamic memory allocation stores simple structured data types at runtime. Separately allocated blocks can be combined to form composite structures that expand and contract as needed, which makes it possible to manage data blocks of arbitrary size in arbitrary order. This is how dynamic memory allocation helps in managing data.

Some operations which we can perform on the queue are: 

  1. enqueue() - adds an element to the end of the queue 
  2. dequeue() - removes an element from the front of the queue 
  3. init() - used to initialize the queue 
  4. isEmpty() - tests whether the queue is empty or not. 

These are the few functions we perform on queues that I can recall right now.

A must-know for anyone heading into an Algorithm interview, this question is frequently asked in Algorithm interview questions.  

Okay. For this, we first require two linked lists. 

For example, if the first List is 2→4→6 and the second is 1→5→7→9→11, the first List should become 2→1→4→5→6→7 and the second List should become empty. 

This is what we are aiming to achieve with the program. All the nodes of the second List should only be inserted when there are positions available. 

Okay, so to do this, we would run a loop while there are available positions in the first List. Then, we would insert nodes of the second List by changing the pointers. The code for this can be given as 

Code: 

class Node(object): 
    def __init__(self, data:int): 
        self.data = data 
        self.next = None 

class LinkedList(object): 
    def __init__(self): 
        self.head = None 

    def push(self, new_data:int): 
        new_node = Node(new_data) 
        new_node.next = self.head 
        # Move the head to point to the new node 
        self.head = new_node 

    # Function to print the linked List starting from the head 
    def printList(self): 
        temp = self.head 
        while temp != None: 
            print(temp.data) 
            temp = temp.next 

    # Main function that inserts nodes of linked List b into a at 
    # alternate positions. The head of the first List never changes, 
    # but the head of the second List does, so we keep updating b.head 
    # to whatever remains of the second List. 
    def merge(self, a, b): 
        a_curr = a.head 
        b_curr = b.head 
        # weave the nodes together until one List is exhausted 
        while a_curr != None and b_curr != None: 
            # Save next pointers 
            a_next = a_curr.next 
            b_next = b_curr.next 
            # make b_curr the next node of a_curr 
            b_curr.next = a_next 
            a_curr.next = b_curr 
            # update current pointers for the next iteration 
            a_curr = a_next 
            b_curr = b_next 
            # whatever is left of b becomes its new head 
            b.head = b_curr 

# Driver code 
list1 = LinkedList() 
list2 = LinkedList() 
# Creating the Linked lists 
# 1. First List: 2 -> 4 -> 6 
list1.push(6) 
list1.push(4) 
list1.push(2) 
# 2. Second List: 1 -> 5 -> 7 -> 9 -> 11 
for i in [11, 9, 7, 5, 1]: 
    list2.push(i) 
print("First Linked List:") 
list1.printList() 
print("Second Linked List:") 
list2.printList() 
# Merging the linked lists 
list1.merge(a = list1, b = list2) 
print("Modified first linked list:") 
list1.printList() 
print("Modified second linked list:") 
list2.printList() 

Output : 

First Linked List: 2 4 6; Second Linked List: 1 5 7 9 11; Modified First Linked List:2 1 4 5 6 7; Modified Second Linked List: 9 11

The code to implement the bubble sort algorithm will have 3 steps 

  1. Declare the array 
  2. Nest a couple of loops to compare the numbers in the array. 
  3. Store the array in ascending order by replacing the elements if found in any other order. 

Code: 

def bubbleSort(arr): 
    n = len(arr) 
    # Traverse through all array elements 
    for i in range(n-1): 
        # Optimization: if no swaps happen in a pass, the array is 
        # already sorted and we can stop early, so reset the flag 
        # at the start of every pass 
        swapped = False 
        # Last i elements are already in place, so traverse 
        # the array from 0 to n-i-1 
        for j in range(0, n-i-1): 
            # Swap if the element found is greater 
            # than the next element 
            if arr[j] > arr[j + 1]: 
                swapped = True 
                arr[j], arr[j + 1] = arr[j + 1], arr[j] 

        if not swapped: 
            # no swap was needed in this pass, so the array is 
            # sorted and we can exit the outer loop 
            return 
 
# Driver code to test above 
arr = [74, 24, 35, 12, 32, 11, 90] 
bubbleSort(arr) 
print("Sorted array is:") 
for i in range(len(arr)): 
    print("% d" % arr[i], end=" ") 

Output: 

Sorted array is: 
 11  12  24  32  35  74  90 

Okay. In insertion sort, we assume that the first element in the array is already sorted. The second element is stored separately in a key variable and inserted into its correct place, which sorts the first two elements. We then take the third element and compare it with the elements on its left, inserting it into its proper position. This process goes on until the entire array is sorted. 

Code: 

def insertionSort(arr):  
    if (n := len(arr)) <= 1: 
      return 
    for i in range(1, n):   
        key = arr[i] 
        # Move elements of arr[0..i-1], that are 
        # greater than key, to one position ahead 
        # of their current position 
        j = i-1 
        while j >=0 and key < arr[j] : 
                arr[j+1] = arr[j] 
                j -= 1 
        arr[j+1] = key
#sorting the array [12, 11, 13, 5, 6] using insertionSort 
arr = [12, 11, 13, 5, 6] 
insertionSort(arr) 
print(arr) 

Okay. Inheritance is the procedure in which one class inherits the attributes and methods of another class. The class that inherits the properties of another class is called the child class, and the class whose properties are being inherited is called the parent class. Another interesting thing about inheritance is that the child class can have its own properties and methods along with the ones inherited from the parent class. 

Code: 

class Person(object): 
    # Constructor 
    def __init__(self, name, id): 
        self.name = name 
        self.id = id 

    # Display the person's details 
    def Display(self): 
        print(self.name, self.id) 

class Child(Person): 
    # Constructor 
    def __init__(self, name, id): 
        # Call the parent constructor, then add the child's own attribute 
        super().__init__(name, id) 
        self.value = "Inside Child" 

    # Child's show method 
    def show(self): 
        print(self.value) 

# Driver code 
emp = Person("Satyam", 102) # An object of Person 
emp.Display() 
# An object of Child can use both inherited and its own methods 
kid = Child("Satyam", 103) 
kid.Display()   # Display() is inherited from Person 
kid.show() 

It's no surprise that this one pops up often in DSA interview questions.  

Okay, so overloading is when a class has 2 or more methods with the same name but different parameters; they are called overloaded methods. Note that Python does not support this directly: when two functions share a name, the most recent definition replaces the earlier one. 

Code: 

# First product method takes two arguments and prints their product 
def product(a, b): 
    p = a * b 
    print(p) 

# Second product method takes three arguments and prints their 
# product. In Python, this definition replaces the first one. 
def product(a, b, c): 
    p = a * b * c 
    print(p) 

# Uncommenting the below line shows an error, because only the 
# three-argument version survives: 
# product(4, 5) 

# This line will call the second product method 
product(4, 5, 5) 
Overriding is when a superclass method is also implemented in the child class. 

Code: 

class Parent():
    # Constructor 
    def __init__(self): 
        self.value = "Inside Parent"       
    # Parent's show method 
    def show(self): 
        print(self.value)    
# Defining child class 
class Child(Parent):     
    # Constructor 
    def __init__(self): 
        self.value = "Inside Child" 
    # Child's show method 
    def show(self): 
        print(self.value)  
# Driver's code 
obj1 = Parent() 
obj2 = Child() 
obj1.show() 
obj2.show() 

The method I would follow to write the program is to first find the reverse of the input string and then compare the two; if they are the same, it is a palindrome; otherwise, it is not. 

Code: 

def isPalindrome(s): 
    return s == s[::-1]
# Driver code 
s = "malayalam" 
ans = isPalindrome(s) 
if ans: 
    print("Yes") 
else: 
    print("No")

For removing all occurrences of a given character from the input string, I would use the built-in string method "replace" to replace the character with an empty string. 

Code: 

# initializing string 
test_str = "CheeseoverCheese" 
# initializing removal character 
rem_char = "e" 
# printing original string 
print("The original string is: " + str(test_str)) 
# Using replace() 
# Deleting all occurrences of the character by replacing it with an 
# empty string 
res = test_str.replace(rem_char, "") 
# printing result 
print("The string after character deletion: " + str(res)) 

Output: 

The original string is: CheeseoverCheese 
The string after character deletion: ChsovrChs 

Okay, so to find the middle element of a linked list, I would first traverse the linked List using 2 pointers. Then, move one pointer by one and another by two. When the fast pointer reaches the end, the slow one will reach the middle. 

Code: 

class Node: 
    # Function to initialize the node object 
    def __init__(self, data): 
        self.data = data 
        self.next = None 
class LinkedList: 
    def __init__(self): 
        self.head = None 
    def push(self, new_data): 
        new_node = Node(new_data) 
        new_node.next = self.head 
        self.head = new_node 
    # Function to get the middle of 
    # the linked List 
    def printMiddle(self): 
        slow_ptr = self.head 
        fast_ptr = self.head 
        if self.head is not None: 
            while(fast_ptr is not None and fast_ptr.next is not None): 
                fast_ptr = fast_ptr.next.next 
                slow_ptr = slow_ptr.next 
            print("The middle element is: ", slow_ptr.data) 
# Driver code 
list1 = LinkedList() 
list1.push(5) 
list1.push(4) 
list1.push(2) 
list1.push(3) 
list1.push(1) 
list1.printMiddle() 

Output: 

The middle element is:  2 

A common question in DS interview questions, don't miss this one.  

This problem statement of deleting a node in the linked List can be solved using the iterative method. 

To delete a node from the linked List, we need to follow three steps; 

  1. Find the previous node from the node that is to be deleted. 
  2. Change the next pointer of the previous node so that it skips the node to be deleted. 
  3. Free memory for the node that is to be deleted. 

The code for solving this problem can be: 

Code: 

class Node: 
    def __init__(self, data): 
        self.data = data 
        self.next = None 

class LinkedList: 
    def __init__(self): 
        self.head = None 

    # Insert a new node at the beginning 
    def push(self, new_data): 
        new_node = Node(new_data) 
        new_node.next = self.head 
        self.head = new_node 

    # Given a reference to the head of a list and a key, 
    # delete the first occurrence of key in the linked List 
    def deleteNode(self, key): 
        # head node 
        temp = self.head 
        # If the head node itself holds the key to be deleted 
        if temp is not None: 
            if temp.data == key: 
                self.head = temp.next 
                temp = None 
                return 
        # Otherwise, search for the key to be deleted, keeping track 
        # of the previous node, as we need to change 'prev.next' 
        prev = None 
        while temp is not None: 
            if temp.data == key: 
                break 
            prev = temp 
            temp = temp.next 
        # if the key was not present in the linked List 
        if temp is None: 
            return 
        # Unlink the node from the linked List 
        prev.next = temp.next 
        temp = None 

    # print the Linked List 
    def printList(self): 
        temp = self.head 
        while temp: 
            print(temp.data, end=" ") 
            temp = temp.next 
        print() 

# Driver program 
llist = LinkedList() 
llist.push(9) 
llist.push(1) 
llist.push(4) 
llist.push(6) 
print("Created Linked List: ") 
llist.printList() 
llist.deleteNode(1) 
print("Linked List after Deletion of 1:") 
llist.printList() 

Output: 

Created Linked List: 6 4 1 9 
Linked List after Deletion of 1: 6 4 9


Top Algorithm Interview Tips and Tricks

Coding and algorithm interviews are a major part of the hiring process in the technical field, and they can be tricky to crack. This section presents 10 tips and tricks to help you ace your next set of DSA interview questions. 

  1. Research the company well. It will give you a good idea about the company's work areas. 
  2. Review potential questions based on the company and job profile you are being interviewed for. 
  3. Go Back to Basics and sharpen the conceptual questions. Conceptual clarity is important. 
  4. Choose your algorithm interview language wisely, so you are able to answer confidently. 
  5. Protect yourself from performance anxiety. It is okay to fumble or miss something. Just get back gracefully. 
  6. Memorize a quick sales pitch on yourself. The introduction is crucial. Keep it concise and focused. 
  7. Stay optimistic and be prepared for challenges. You might get tricky questions. Face them wisely. 
  8. Try to maintain a conversation throughout the interview. It is very important to keep the interview conversational. 
  9. Try to ask counter-questions to understand what the interviewer wants and present a systematic answer. In addition to this, always think out loud while solving a problem and let them know how you are processing the situation. 
  10. Always ask good questions at the end of the interview.

How to Master Algorithm and Data Structure Questions?

Data structures and algorithms are an important part of most software and computer science positions. To prepare for an algorithm interview, you should develop a deep knowledge of data structures. Know and understand the strengths and weaknesses of the different types of data structures well. Take out time to implement data structures from scratch rather than just reading about them. This will give you a better, more practical understanding of how things actually work. 

Understand the Big O Notation for both time complexity and space complexity. Make sure to know about this while you read about common algorithms and their memory and runtime complexity to learn how the Big O Notation is calculated. 

Learn and practice major searching and sorting algorithms, understand their space and time complexity, and how these can derail your optimal solution for an algorithm problem. Do not limit yourself to just searching and sorting algorithms; go further with graph, traversal, and string matching algorithms also. 

The one thing which is crucial after you have prepared well for the data structure questions is to listen to and understand the problem well. Think about it and reply with confidence and patience. Try to catch hints in the question asked to you, and if the problem statement is incomplete, always ask clarifying questions before solving the problem. To prepare for an algorithm interview, you can also take an Algorithm course.

Companies:  

  • Amazon 
  • Google 
  • Microsoft 
  • Meta 
  • Apple  
  • IBM 
  • Walmart 
  • Accenture 
  • Infosys 
  • Tata Consultancy Services 
  • Capgemini 
  • Deloitte 

Jobs: 

  • Backend Developer  
  • Software Developer  
  • Python Developer  
  • Full Stack Developer  
  • Software Engineer  
  • Senior Software Engineer  
  • Deep Learning Engineer 
  • Automation Testing Engineer  
  • QE Automation Testing Engineer  
  • Product Developer

What to Expect in DSA Interview Questions?

Algorithm interviews are conducted to assess your skills in problem-solving using different algorithms and data structures. This is a crucial step and can be overwhelming for the candidate. This section shares some of the topics that interview questions are commonly formulated from. These concepts, if studied and practiced well, can help ensure a good performance in your next algorithm interview. 

Sorting and Searching 

  1. Binary Search 
  2. Bubble Sort 
  3. Insertion Sort 
  4. Merge Sort 
  5. Heap Sort 
  6. Quick Sort 
  7. Find kth smallest/largest element in an unsorted list 
  8. Interpolation Search 
  9. Depth-first search 
  10. Breadth-first search 

Linked List 

  1. Inserting a node in Linked List 
  2. Deleting a node from a linked list 
  3. Reverse a linked list 
  4. Merge two linked lists at alternate positions 
  5. Detect and remove the loop in a linked list. 
  6. Merge sort for linked List 
  7. Select a random node from a singly linked list 
  8. Compare two strings represented by a linked list 
  9. Add two numbers represented as linked List 
  10. Reverse a list in the groups of a given size 

Trees 

  1. Maximum path sum in a binary tree 
  2. Minimum depth of a binary tree 
  3. Remove nodes on the root-to-leaf path of length less than k 
  4. Check if the binary Tree is a subtree for another binary tree 
  5. Reverse alternate levels of a perfect binary tree. 
  6. Bottom view Binary Tree 
  7. Check for a given array if it can represent the Preorder Traversal of the Binary Search Tree 

Dynamic Programming 

  1. 0-1 knapsack problem 
  2. Minimum Partition 
  3. Ways to cover a distance 
  4. Subset sum problem 
  5. Boolean Parenthesization problem 
  6. Longest common subsequence 
  7. Longest increasing subsequence 

Summary

The interview process for a position in the tech field, whether it is a developer, backend engineer, software engineer, or any other similar tech position, requires extensive knowledge and skills in algorithms and data structures. Selection or rejection of candidates is decided largely on the basis of these interviews. To prepare more for the programming and coding questions on algorithms and data structures, you can take the best coaching for Programming. 

On the job, algorithms are a collection of different concepts you will be using to solve problems and complete tasks. Therefore, it is very important to have sound knowledge of algorithms and make sure that you ace your next algorithms interview. This work presents 60 algorithm interview questions for beginner, intermediate, and advanced levels, 20 questions for each category, along with detailed answers. These are among the most commonly asked questions in algorithm interviews, and if prepared well, they can give you a lot of knowledge and confidence in your actual interview.

The topics covered through the questions in this work are Searching algorithms, Sorting Algorithms, Linked lists, Trees, Dynamic programming, Queues, Stacks, and more. The answers to all the questions are presented in a conversational tone to give you an idea of how you should answer such questions in the interview to keep the conversation natural and not mechanical. 

We have also presented various tips and tricks on how to ace algorithm interview questions. These are golden tips that, if followed well, can make a strongly positive impression on your interviewer and increase your chances of getting selected for the position. During the interview, you should always stay calm, composed, and attentive, listen to your interviewer well, and catch the hints, if any.

During your algorithm interview preparation, you must study as well as practice the algorithms from scratch to get a practical idea of how things work. This level of preparation will greatly improve your chances of being selected for a developer or software engineer position. To start preparing better, you can take the best coaching for a Programming language and get hired by the best tech companies.
