Finiteness: An algorithm must terminate after a finite number of steps. This means that the algorithm cannot run forever and must eventually reach a stopping point.
Definiteness: An algorithm must have a clear and unambiguous set of instructions. The instructions must be precise enough that any person or machine can understand them and carry them out correctly.
Input: An algorithm takes zero or more inputs: data or information that the algorithm uses to perform its task.
Output: An algorithm must produce output. The output can be the result of the algorithm’s computation or a change in state.
Effectiveness: An algorithm must be effective in solving the problem it is designed to solve. This means that the algorithm must be able to produce the correct output for any valid input, and it must do so in a reasonable amount of time and using a reasonable amount of resources.
Overall, algorithm analysis is a critical part of computer science, as it helps researchers, developers, and practitioners to design, optimize, and use algorithms effectively.
There are several ways to analyze an algorithm, ranging from theoretical analysis of its time and space complexity to empirical measurement of its running time. Overall, algorithm analysis is an iterative process that involves using a combination of these methods to gain a comprehensive understanding of an algorithm’s performance and behavior.
“Big O” notation is a way of expressing the time complexity and space complexity of an algorithm. It is used to describe the upper bound or worst-case scenario for how the running time or memory usage of an algorithm grows as the input size increases.
In the “Big O” notation, we write the time or space complexity as a function of the input size, n, and then ignore any lower-order terms and constant factors. For example, if an algorithm has a time complexity of O(n^2), this means that the running time of the algorithm grows no faster than the square of the input size. Similarly, if an algorithm has a space complexity of O(n), this means that the amount of memory used by the algorithm grows no faster than the input size.
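A quick worked example of dropping lower-order terms and constant factors (the step count is invented purely for illustration):

```latex
T(n) = 3n^2 + 5n + 2 \;\Rightarrow\; T(n) = O(n^2),
\quad \text{since } 3n^2 + 5n + 2 \le 10n^2 \text{ for all } n \ge 1.
```

The lower-order term 5n and all the constants disappear; only the dominant n^2 term remains.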
“Big O” notation is useful because it provides a concise and standardized way of describing the performance of an algorithm, regardless of the specific details of the implementation. It also allows us to compare the performance of different algorithms in a way that is independent of the hardware and software used to run them.
Other common notations that are related to “Big O” notation include “Big Omega” notation, which provides a lower bound on the growth rate of a function, and “Big Theta” notation, which provides both an upper and a lower bound on the growth rate of a function.
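For reference, the three notations can be stated formally as follows (a standard formulation, with c a positive constant and n₀ a threshold input size):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 :\; f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 :\; f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```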
Constant time refers to an algorithm or operation that takes the same amount of time, regardless of the size of its input. In other words, the running time of the algorithm does not depend on the input size.
In time complexity analysis, constant time is represented by the notation O(1). This notation indicates that the running time of the algorithm is bounded by a constant factor, regardless of the size of the input.
Examples of operations that typically have constant time complexity include basic arithmetic operations, accessing an element in an array or a hash table, and checking if a number is even or odd.
Constant time operations are considered highly efficient because they can be performed quickly, regardless of the input size. In some cases, algorithms can be designed to take advantage of constant time operations to optimize their overall performance.
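A minimal PHP sketch of such constant-time operations (the array and values are made up for illustration):

```php
<?php
$numbers = [10, 20, 30, 40, 50];

// Accessing an element by index: O(1), no matter how long the array is.
$third = $numbers[2];

// Basic arithmetic: O(1).
$sum = $third + 5;

// Checking if a number is even or odd: O(1).
$isEven = ($sum % 2 === 0);

echo $isEven ? "even" : "odd"; // prints "odd" (35 is odd)
```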
Linear time refers to an algorithm or operation whose running time increases proportionally with the size of its input. In other words, the running time of the algorithm is directly proportional to the size of the input.
In time complexity analysis, linear time is represented by the notation O(n). This notation indicates that the running time of the algorithm grows linearly with the size of the input.
Examples of operations that typically have linear time complexity include iterating through an array or a linked list, performing a linear search or a linear scan, and copying an array of n elements.
Linear time operations are considered relatively efficient, but their running time can still become prohibitive for large input sizes. As a result, algorithms that have linear time complexity may still require optimization or improvements to their algorithmic design to achieve better performance.
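A minimal PHP sketch of a linear search; the function name and data here are just for the example:

```php
<?php
// Linear search: in the worst case every element is inspected, so O(n).
function linearSearch(array $haystack, $needle): int
{
    foreach ($haystack as $index => $value) {
        if ($value === $needle) {
            return $index; // found: return its position
        }
    }
    return -1; // not found
}

echo linearSearch([4, 8, 15, 16, 23, 42], 23); // prints 4
```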
Logarithmic time refers to an algorithm or operation whose running time increases logarithmically with the size of its input. In other words, the running time of the algorithm grows at a rate that is proportional to the logarithm of the input size.
In time complexity analysis, logarithmic time is represented by the notation O(log n). This notation indicates that the running time of the algorithm grows logarithmically with the size of the input.
Examples of operations that typically have logarithmic time complexity include binary search on a sorted array or a binary search tree, and certain operations on graphs and trees such as finding the lowest common ancestor.
Logarithmic time operations are considered more efficient than linear time operations because they scale more slowly with input size. For example, an algorithm with linear time complexity takes twice as long to run if the input size doubles, whereas an algorithm with logarithmic time complexity takes only slightly longer (with base-2 logarithms, doubling the input adds just one step). Therefore, algorithms that have logarithmic time complexity are often considered very efficient for large input sizes.
What does “grows logarithmically” mean?
When saying that an algorithm’s running time “grows logarithmically” with the input size, we mean that the running time of the algorithm increases at a rate that is proportional to the logarithm of the input size.
To understand what this means, consider an algorithm that has a logarithmic time complexity of O(log n), and use base-10 logarithms for simplicity. Let’s say the input size of the algorithm is n = 10,000. In this case, the algorithm would take roughly log(10,000) = 4 steps to complete. Now, let’s say we increase the input size to n = 1,000,000. In this case, the algorithm would take roughly log(1,000,000) = 6 steps to complete.
Notice that even though the input size increased by a factor of 100, the running time of the algorithm only increased by a factor of 1.5, because the logarithm of 1,000,000 (6) is only 1.5 times as large as the logarithm of 10,000 (4). In other words, the running time of the algorithm grows much more slowly than the input size.
This property of logarithmic growth is what makes algorithms with logarithmic time complexity highly efficient for large input sizes. They can process large amounts of data relatively quickly, making them useful for a wide range of applications in computer science and beyond.
Another, simpler example: log₂(8) = 3, because 8 must be divided by 2 three times to get down to 1. In this case, 8 plays the role of n in O(log n).
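Binary search is the classic O(log n) case: each comparison halves the remaining search range. A minimal PHP sketch, assuming the input array is already sorted:

```php
<?php
// Binary search on a sorted array: the search range is halved on every
// iteration, so at most about log2(n) iterations are needed -> O(log n).
function binarySearch(array $sorted, int $target): int
{
    $low = 0;
    $high = count($sorted) - 1;

    while ($low <= $high) {
        $mid = intdiv($low + $high, 2);
        if ($sorted[$mid] === $target) {
            return $mid;       // found
        } elseif ($sorted[$mid] < $target) {
            $low = $mid + 1;   // discard the lower half
        } else {
            $high = $mid - 1;  // discard the upper half
        }
    }
    return -1; // not found
}

echo binarySearch([1, 3, 5, 7, 9, 11, 13], 9); // prints 4
```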
Quadratic time refers to an algorithm or operation whose running time increases quadratically with the size of its input. In other words, the running time of the algorithm grows at a rate that is proportional to the square of the input size.
In time complexity analysis, quadratic time is represented by the notation O(n^2). This notation indicates that the running time of the algorithm grows quadratically with the size of the input.
Examples of operations that typically have quadratic time complexity include nested loops that iterate over all pairs of elements in a two-dimensional array or a matrix, and some sorting and searching algorithms such as selection sort and bubble sort.
Quadratic time operations are considered relatively inefficient, especially for large input sizes, since their running time can become prohibitively long. As a result, algorithms that have quadratic time complexity may require optimization or improvements to their algorithmic design to achieve better performance.
It is best to avoid quadratic-time algorithms whenever possible, since their running time grows rapidly as the input grows.
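As an illustration, here is a minimal PHP sketch of bubble sort, mentioned above; the nested loops are what make it O(n^2):

```php
<?php
// Bubble sort: the nested loops perform on the order of n^2 comparisons
// in the worst case, which is why it is classified as O(n^2).
function bubbleSort(array $a): array
{
    $n = count($a);
    for ($i = 0; $i < $n - 1; $i++) {
        for ($j = 0; $j < $n - 1 - $i; $j++) {
            if ($a[$j] > $a[$j + 1]) {
                [$a[$j], $a[$j + 1]] = [$a[$j + 1], $a[$j]]; // swap
            }
        }
    }
    return $a;
}

print_r(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]
```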
Cubic time is the same idea as quadratic time, but the running time grows with the third power of the input size. It is represented by O(n^3).
constant time < logarithmic time < linear time < quadratic time < cubic time (ordered from slowest-growing to fastest-growing running time)
Here’s a table showing the approximate number of steps for each complexity class as n goes from 1 to 5 (logarithms are base 2):

| n | O(1) | O(log n) | O(n) | O(n^2) | O(n^3) |
|---|------|----------|------|--------|--------|
| 1 | 1 | 0 | 1 | 1 | 1 |
| 2 | 1 | 1 | 2 | 4 | 8 |
| 3 | 1 | 1.58 | 3 | 9 | 27 |
| 4 | 1 | 2 | 4 | 16 | 64 |
| 5 | 1 | 2.32 | 5 | 25 | 125 |
Every recursion must have a base case.
Every recursive call must progress towards the base case.
Factorial of a given number
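A minimal recursive PHP sketch for this example, with the base case and the progress toward it called out:

```php
<?php
// Recursive factorial: n! = n * (n - 1)!
function factorial(int $n): int
{
    if ($n <= 1) {
        return 1;                  // base case: 0! = 1! = 1
    }
    return $n * factorial($n - 1); // each call progresses toward the base case
}

echo factorial(5); // prints 120
```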
In most languages, arrays are homogeneous: only elements of the same type can be stored.
Arrays in PHP are actually ordered maps. A map is a type that associates values to keys, which is why PHP arrays double as associative arrays.
Types of arrays in PHP: indexed arrays (integer keys), associative arrays (string keys), and multidimensional arrays (arrays containing other arrays).
PHP arrays are dynamic, and you can emulate all sorts of other data structures with them, such as stacks and queues, as sketched below:
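A minimal sketch of a PHP array acting as a stack and as a queue, using built-in array functions:

```php
<?php
// Stack (LIFO) using a PHP array.
$stack = [];
array_push($stack, 'a');  // push
array_push($stack, 'b');
echo array_pop($stack);   // pop -> prints "b"

// Queue (FIFO) using a PHP array.
$queue = [];
array_push($queue, 'x');  // enqueue
array_push($queue, 'y');
echo array_shift($queue); // dequeue -> prints "x"
```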
For loops work well for accessing indexed arrays. Foreach, while, and ArrayIterator are among the other methods; a quick sketch of each follows.
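A minimal sketch of these options side by side (the array contents are just for illustration):

```php
<?php
$items = ['red', 'green', 'blue'];

// for loop over an indexed array.
for ($i = 0; $i < count($items); $i++) {
    echo $items[$i], PHP_EOL;
}

// foreach: the most common idiom for PHP arrays.
foreach ($items as $index => $item) {
    echo "$index => $item", PHP_EOL;
}

// ArrayIterator from the SPL.
$it = new ArrayIterator($items);
foreach ($it as $item) {
    echo $item, PHP_EOL;
}
```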
SplFixedArray: fixed size, numeric indices
Regular Array vs SplFixedArray
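A minimal sketch of the difference in usage (SplFixedArray is part of the SPL; writing outside its fixed range throws a RuntimeException):

```php
<?php
// Regular PHP array: dynamic size, any kind of key.
$regular = [];
$regular[] = 1;       // grows automatically
$regular['key'] = 2;  // string keys are allowed

// SplFixedArray: fixed size, numeric indices only.
$fixed = new SplFixedArray(3);
$fixed[0] = 10;
$fixed[1] = 20;
$fixed[2] = 30;
// $fixed[3] = 40;    // would throw a RuntimeException (index out of range)

echo $fixed->getSize();     // prints 3
print_r($fixed->toArray()); // [10, 20, 30]
```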
Sum of the array using two pointers
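This heading most likely refers to the classic two-pointer technique for finding a pair with a given sum in a sorted array; the sketch below works under that assumption, and the function name is made up for the example:

```php
<?php
// Two-pointer technique on a sorted array: move one pointer from each end
// toward the middle to find a pair that sums to the target. O(n) time.
function pairSum(array $sorted, int $target): ?array
{
    $left = 0;
    $right = count($sorted) - 1;

    while ($left < $right) {
        $sum = $sorted[$left] + $sorted[$right];
        if ($sum === $target) {
            return [$sorted[$left], $sorted[$right]];
        } elseif ($sum < $target) {
            $left++;  // need a bigger sum
        } else {
            $right--; // need a smaller sum
        }
    }
    return null; // no such pair
}

print_r(pairSum([1, 2, 4, 7, 11, 15], 15)); // [4, 11]
```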
Maximum sum subarray
Kadane’s algorithm
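A minimal PHP sketch of Kadane’s algorithm, which finds the maximum sum of a contiguous subarray in O(n) time:

```php
<?php
// Kadane's algorithm: at each element, either extend the current subarray
// or start a new one; track the best sum seen so far. O(n) time.
function maxSubArraySum(array $a): int
{
    $best = $a[0];
    $current = $a[0];

    for ($i = 1; $i < count($a); $i++) {
        $current = max($a[$i], $current + $a[$i]); // extend or restart
        $best = max($best, $current);
    }
    return $best;
}

echo maxSubArraySum([-2, 1, -3, 4, -1, 2, 1, -5, 4]); // prints 6 (4 - 1 + 2 + 1)
```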
Have any questions or comments? Write them below!