Commit

Merge branch 'yash/fix-1335' of https://github.com/yashksaini-coder/C into yash/fix-1335
yashksaini-coder committed Oct 29, 2024
2 parents b52165c + e7ce5a0 commit 6c62a82
Showing 15 changed files with 732 additions and 0 deletions.
41 changes: 41 additions & 0 deletions 1D Arrays/Intersectionoftwoarrays/README.md
@@ -0,0 +1,41 @@
# Intersection of Two Arrays in C

This project contains a C program that calculates the intersection of two integer arrays, provided by the user. The program takes the sizes and elements of two arrays as input, finds their intersection, and outputs the common elements.

## Process

1. **User Input**:
- The user is prompted to enter the number of elements and the elements themselves for two arrays, `arr1` and `arr2`.

2. **Intersection Calculation**:
- The function `findIntersection` iterates through each element of `arr1` and checks if it exists in `arr2` using a nested `while` loop.
- If an element in `arr1` matches any element in `arr2`, it is printed as part of the intersection.

3. **Output**:
- The program prints the elements found in both arrays, representing the intersection.

### Example
Consider the example arrays `{1, 2, 3, 4, 5}` and `{1, 2, 3}`.

1. **Input**:
- `arr1 = {1, 2, 3, 4, 5}`
- `arr2 = {1, 2, 3}`

2. **Output**:
   - Intersection of the two arrays: `{1, 2, 3}`

## Complexity Analysis

### Time Complexity
- **Worst Case**: \(O(n_1 \times n_2)\), where `n1` is the length of `arr1` and `n2` is the length of `arr2`.
- This is because the nested loops compare each element of `arr1` against every element of `arr2`.
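As a concrete illustration, for the example arrays above (`n1 = 5`, `n2 = 3`) the nested loops perform at most \(5 \times 3 = 15\) element comparisons.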

### Space Complexity
- **Space Complexity**: \(O(1)\) (excluding the input arrays).
- The program does not use any extra storage for results other than printing directly.

### Assumptions
- The program assumes the arrays contain only integer values.
- It does not deduplicate the output: if `arr1` contains repeated values that also appear in `arr2`, each occurrence is printed separately (a duplicate-aware sketch is shown below).
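
A minimal sketch of a duplicate-aware variant (illustrative only; `findIntersectionUnique` is not part of `program.c`): an element of `arr1` is printed only on its first occurrence.

```c
#include <stdio.h>

// Hypothetical variant of findIntersection: prints each common value only once.
void findIntersectionUnique(int arr1[], int arr2[], int n1, int n2) {
    printf("Intersection of the two arrays: ");
    for (int i = 0; i < n1; i++) {
        // Skip arr1[i] if the same value appeared at an earlier index (already handled).
        int seenBefore = 0;
        for (int k = 0; k < i; k++) {
            if (arr1[k] == arr1[i]) {
                seenBefore = 1;
                break;
            }
        }
        if (seenBefore) {
            continue;
        }
        // Print arr1[i] if it occurs anywhere in arr2.
        for (int j = 0; j < n2; j++) {
            if (arr1[i] == arr2[j]) {
                printf("%d ", arr1[i]);
                break;
            }
        }
    }
    printf("\n");
}
```

This keeps the \(O(1)\) extra space of the original at the cost of an extra scan per element, raising the worst case to roughly \(O(n_1^2 + n_1 \times n_2)\).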


45 changes: 45 additions & 0 deletions 1D Arrays/Intersectionoftwoarrays/program.c
@@ -0,0 +1,45 @@
#include <stdio.h>

void findIntersection(int arr1[], int arr2[], int n1, int n2) {
    printf("Intersection of the two arrays: ");
    int i = 0;
    while (i < n1) {
        int j = 0;
        while (j < n2) {
            if (arr1[i] == arr2[j]) {
                printf("%d ", arr1[i]);
                break; // Move to the next element in arr1 after finding a match
            }
            j++;
        }
        i++;
    }
    printf("\n");
}

int main() {
    int n1, n2;

    // Take the size of the first array from the user
    printf("Enter the number of elements in the first array: ");
    scanf("%d", &n1);
    int arr1[n1];
    printf("Enter the elements of the first array:\n");
    for (int i = 0; i < n1; i++) {
        scanf("%d", &arr1[i]);
    }

    // Take the size of the second array from the user
    printf("Enter the number of elements in the second array: ");
    scanf("%d", &n2);
    int arr2[n2];
    printf("Enter the elements of the second array:\n");
    for (int i = 0; i < n2; i++) {
        scanf("%d", &arr2[i]);
    }

    // Find and print the intersection of the two arrays
    findIntersection(arr1, arr2, n1, n2);

    return 0;
}
69 changes: 69 additions & 0 deletions Backtracking Algorithms/Word Search in a 2D Grid/Program.c
@@ -0,0 +1,69 @@
#include <stdio.h>
#include <stdbool.h>
#include <string.h>

#define ROWS 3
#define COLS 4

// Struct to store board and word details
typedef struct {
    char board[ROWS][COLS];
    int rows;
    int cols;
} Board;

// Helper function for backtracking
bool backtrack(Board *b, const char *word, int index, int i, int j) {
    if (index == strlen(word)) {
        return true; // All characters in the word have been found
    }
    // Check if the current position is within the board boundaries and if the character matches
    if (i < 0 || j < 0 || i >= b->rows || j >= b->cols || b->board[i][j] != word[index]) {
        return false; // Out of bounds or mismatch
    }

    char temp = b->board[i][j];
    b->board[i][j] = '*'; // Mark the current cell as visited

    // Explore neighboring cells
    bool exist = backtrack(b, word, index + 1, i - 1, j) ||
                 backtrack(b, word, index + 1, i, j - 1) ||
                 backtrack(b, word, index + 1, i + 1, j) ||
                 backtrack(b, word, index + 1, i, j + 1);

    b->board[i][j] = temp; // Restore the cell

    return exist;
}

// Function to check if a word exists in the board
bool exist(Board *b, const char *word) {
    for (int i = 0; i < b->rows; i++) {
        for (int j = 0; j < b->cols; j++) {
            if (backtrack(b, word, 0, i, j)) {
                return true;
            }
        }
    }
    return false;
}

int main() {
    Board b = {
        {
            {'A', 'B', 'C', 'E'},
            {'S', 'F', 'C', 'S'},
            {'A', 'D', 'E', 'E'}
        },
        ROWS, COLS
    };
    const char *word1 = "ABCCED";
    const char *word2 = "SEE";
    const char *word3 = "ABCB";

    printf("Word \"%s\" exists: %s\n", word1, exist(&b, word1) ? "true" : "false");
    printf("Word \"%s\" exists: %s\n", word2, exist(&b, word2) ? "true" : "false");
    printf("Word \"%s\" exists: %s\n", word3, exist(&b, word3) ? "true" : "false");

    return 0;
}
46 changes: 46 additions & 0 deletions Backtracking Algorithms/Word Search in a 2D Grid/README.md
@@ -0,0 +1,46 @@
# Word Search in a 2D Grid

### Overview

This program determines if a given word can be found in a 2D grid of letters. The word can be constructed from letters in the grid by moving horizontally or vertically from one letter to another adjacent letter. The same cell cannot be used more than once in the word search.

### Problem Definition

Given a 2D board of characters and a word, the task is to determine if the word exists in the grid. The word can be constructed by sequentially adjacent cells (horizontally or vertically). Each cell in the grid can be used only once per word search.

**Input**: A 2D board of characters and a string `word`.

**Output**: A boolean value - `true` if the word exists in the grid, otherwise `false`.

### Algorithm Overview

1. **Iterate Through the Grid**: The program starts by iterating over each cell in the grid. For each cell, if the character matches the first letter of the word, it initiates a recursive search from that cell.

2. **Recursive Backtracking (`backtrack`)**:
- The function `backtrack` is a recursive function that checks if the current cell matches the current character in the word.
- If the entire word is found (index reaches the length of the word), the function returns `true`.
- Otherwise, the function marks the cell as visited by temporarily changing its value.
- It then recursively checks the adjacent cells (up, down, left, and right) for the next character in the word.
- If a path is found, it returns `true`; otherwise, it restores the cell's original value (backtracking) and continues the search.

3. **Boundary and Matching Checks**: During the search, boundary conditions are checked to ensure the search does not go out of bounds, and that each cell matches the corresponding character in the word.
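
4. **Complexity Estimate**: Each of the `R * C` starting cells launches a depth-first search of depth at most `L` (the length of the word), and after the first step at most three unvisited neighbors remain, so the search runs in roughly `O(R * C * 3^L)` time in the worst case and uses `O(L)` extra space for the recursion stack.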

### Example

Let the `board` be:

- Row 0: `A B C E`
- Row 1: `S F C S`
- Row 2: `A D E E`

Let `word = "ABCCED"`.

**Steps**:
1. The search starts at cell `[0][0]` ('A'), which matches the first letter of the word.
2. The recursion then follows the adjacent cells `[0][1]` ('B'), `[0][2]` ('C'), `[1][2]` ('C'), `[2][2]` ('E'), and `[2][1]` ('D'), spelling out "ABCCED" without reusing any cell.

**Output**: `true`

### Edge Cases

1. **Empty Word**: Because the recursion succeeds as soon as `index` reaches the word's length, an empty word is reported as found immediately; callers that need `false` for an empty string should check `strlen(word)` before calling `exist` (see the guard sketched after this list). The board dimensions are fixed by `ROWS` and `COLS`, so an empty grid does not arise in this program.
2. **Word Not in Grid**: If no path can form the word, the function returns `false`.
3. **Repeated Letters in Word**: The word can contain repeated letters, but each cell in the grid can be used only once per word search.
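
A hypothetical wrapper (not part of `Program.c`) that makes the empty-word behaviour explicit could look like this:

```c
// Hypothetical guard around exist(): treat an empty or missing word as "not found".
bool existNonEmpty(Board *b, const char *word) {
    if (word == NULL || word[0] == '\0') {
        return false;          // reject empty input explicitly
    }
    return exist(b, word);     // fall back to the backtracking search
}
```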

77 changes: 77 additions & 0 deletions Machine_Learning_Algorithms/SVM/Program.c
@@ -0,0 +1,77 @@
#include <stdio.h>
#include <stdlib.h>

#define LEARNING_RATE 0.01
#define EPOCHS 1000
#define LAMBDA 0.01 // Regularization parameter

// Sample data structure for a data point
typedef struct {
    double x1;
    double x2;
    int label;
} DataPoint;

// Function to initialize weights
void initializeWeights(double *weights, int size) {
    for (int i = 0; i < size; i++) {
        weights[i] = 0.0;
    }
}

// Function to calculate the dot product of the weight vector and a data point
double dotProduct(double *weights, DataPoint point) {
    return weights[0] * point.x1 + weights[1] * point.x2;
}

// SVM training function using Stochastic Gradient Descent on the hinge loss
// (note: this simple model has no bias term)
void trainSVM(DataPoint *data, int dataSize, double *weights) {
    for (int epoch = 0; epoch < EPOCHS; epoch++) {
        for (int i = 0; i < dataSize; i++) {
            DataPoint point = data[i];
            double y = point.label;
            double prediction = dotProduct(weights, point);

            // Update rule for SVM: hinge-loss gradient step when the margin is violated,
            // otherwise only the regularization term shrinks the weights
            if (y * prediction < 1) {
                weights[0] += LEARNING_RATE * ((y * point.x1) - (2 * LAMBDA * weights[0]));
                weights[1] += LEARNING_RATE * ((y * point.x2) - (2 * LAMBDA * weights[1]));
            } else {
                weights[0] += LEARNING_RATE * (-2 * LAMBDA * weights[0]);
                weights[1] += LEARNING_RATE * (-2 * LAMBDA * weights[1]);
            }
        }
    }
}

// Function to make predictions
int predict(double *weights, DataPoint point) {
    double prediction = dotProduct(weights, point);
    return (prediction >= 0) ? 1 : -1;
}

int main() {
    // Training data: a small two-class toy dataset of (x1, x2) points with labels +1 / -1
    DataPoint data[] = {
        {2, 3, 1},
        {1, 1, -1},
        {2, 1, -1},
        {3, 2, 1},
        {3, 3, 1},
        {1, 2, -1}
    };
    int dataSize = sizeof(data) / sizeof(data[0]);

    double weights[2];
    initializeWeights(weights, 2);

    // Train the SVM model
    trainSVM(data, dataSize, weights);

    // Test the SVM model
    DataPoint testPoint = {3, 3, 1};
    int prediction = predict(weights, testPoint);
    printf("Prediction for test point (3, 3): %d\n", prediction);

    return 0;
}
40 changes: 40 additions & 0 deletions Machine_Learning_Algorithms/SVM/README.md
@@ -0,0 +1,40 @@
# Support Vector Machine (SVM) Algorithm in Machine Learning

## Description
Support Vector Machine (SVM) is a powerful supervised machine learning algorithm used for both classification and regression tasks. SVM aims to find a hyperplane in an N-dimensional space (where N is the number of features) that distinctly classifies the data points. It works by maximizing the margin between data points of different classes, which makes it particularly useful for high-dimensional spaces.

## Key Features
- **Robust Classification**: Efficiently handles linear and non-linear classification problems.
- **Effective in High Dimensions**: Works well in scenarios with a large number of features compared to the number of samples.
- **Versatile Kernel Functions**: Offers multiple kernel options (linear, polynomial, RBF) to transform data for optimal separation.
- **Regularization Parameter (C)**: Balances margin maximization and error minimization for optimal generalization.

## Problem Definition
Given a set of labeled data points, SVM attempts to determine a hyperplane that best divides the data into two classes. For example, given customer purchase data, an SVM can classify potential customers as likely or unlikely to purchase a product.

Mathematically, the problem can be defined as finding a hyperplane that separates data points with maximum margin, which is the distance between the hyperplane and the nearest data point from either class.
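
The accompanying `Program.c` implements a simplified, bias-free version of this idea: stochastic gradient descent on the L2-regularized hinge loss. As a sketch, with \(\lambda\) the regularization strength (`LAMBDA` in the code), the objective being minimized is

\[
\min_{w}\;\; \lambda \lVert w \rVert^{2} \;+\; \frac{1}{N} \sum_{i=1}^{N} \max\bigl(0,\; 1 - y_i \,(w \cdot x_i)\bigr),
\]

and each training sample triggers the update \(w \leftarrow w + \eta\,(y_i x_i - 2\lambda w)\) when the margin is violated (\(y_i (w \cdot x_i) < 1\)), or \(w \leftarrow w - 2\eta\lambda w\) otherwise, where \(\eta\) is the learning rate (`LEARNING_RATE`).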

## Algorithm Review
1. **Linear SVM**: If the data is linearly separable, SVM finds a straight line (hyperplane) to separate classes.
2. **Non-Linear SVM**: For non-linearly separable data, SVM uses kernel tricks to transform the data into a higher dimension where a linear separator can be applied.
3. **Choosing the Kernel**: SVM allows different kernel functions to map data to higher dimensions:
- **Linear Kernel**: Used for linearly separable data.
- **Polynomial Kernel**: A polynomial curve separator for non-linear data.
- **Radial Basis Function (RBF) Kernel**: A Gaussian kernel suited for complex boundaries.
4. **Regularization Parameter (C)**: Controls the trade-off between maximizing the margin and minimizing classification errors.
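
As an illustration, the RBF kernel listed above is commonly written as

\[
K(x, x') = \exp\!\left(-\frac{\lVert x - x' \rVert^{2}}{2\sigma^{2}}\right),
\]

where \(\sigma\) controls how quickly the similarity between two points decays with distance; likewise, larger values of `C` penalize margin violations more heavily, while smaller values trade some training errors for a wider margin.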

## Time Complexity
The time complexity of training an SVM depends on the number of samples (N) and the number of features (d):
- For a **linear SVM** trained with gradient-based methods, each pass over the data costs roughly `O(N * d)`.
- For a **non-linear SVM** (e.g., using the RBF kernel), training typically costs on the order of `O(N^2 * d)` or more, since kernel values must be evaluated over pairs of samples.
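
To make these figures concrete: with `N = 10,000` samples and `d = 100` features, one pass of a linear SVM touches on the order of `10^6` feature values, whereas a kernel SVM must evaluate a kernel matrix with roughly `N^2 = 10^8` entries.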

## Applications
- **Image Classification**: SVM is widely used for recognizing images, such as facial recognition, hand-written digit recognition, and object classification.
- **Text Classification**: SVM is highly effective in text categorization, especially for spam detection and sentiment analysis.
- **Bioinformatics**: Used in classifying genes, analyzing proteins, and other biological datasets.
- **Customer Segmentation**: Helps businesses classify customers based on purchase behaviors for targeted marketing.

## Conclusion
SVM is a highly effective algorithm for both linear and non-linear classification tasks. Its flexibility with kernel functions and effectiveness in high-dimensional spaces make it a valuable tool in the machine learning toolkit. However, its computational complexity can be a limitation in large datasets, making kernel selection and regularization parameter tuning essential for optimal performance.

For an in-depth understanding, refer to the documentation or consult resources on kernel functions and hyperparameter tuning.