Monday, October 12, 2015

PCD PROGRAMS : Implementation of a lexical analyzer to separate the subject, verb, and object from a given English statement.

Input:

%{
%}

%%
[\t ]+
is |
am |
go|Mango {printf("\n%s: is a verb",yytext);}
I|she|we|he|i {printf("\n%s : is a subject ",yytext);}
eat {printf("\n%s : is a object ",yytext);}
[a-zA-Z]+ {printf("\n%s: is not verb",yytext);}


%%

main()
{
yylex();
}

Output:
database@database-Veriton-Series:~/Desktop$ lex frist.l
database@database-Veriton-Series:~/Desktop$ cc lex.yy.c -ll
database@database-Veriton-Series:~/Desktop$ ./a.out
i eat mango
i : is a subject 
eat : is a object 
mango: is not verb

PCD PROGRAM : Implementation of a lexical analyzer to count the total number of words, characters, special symbols, and lines in the given input.

Input:



%{
#include<stdio.h>
int w=0,d=0,ln=0,c=0;
int s=0;
%}
%%
[A-Za-z]+ {w++; c=c+yyleng;}
[0-9]+ {d++;}
[\n] {ln++;}
. {s=s+yyleng;}
%%
int main(void)
{
yylex();
printf(" Wordcount= %d\n Numbercount= %d\n",w,d);
printf(" Charcount= %d\n Linecount= %d\n",c,ln);
printf(" specialSymbolcount= %d\n",s);
}

Output:
database@database-Veriton-Series:~/Desktop$ lex second.l
database@database-Veriton-Series:~/Desktop$ cc lex.yy.c -ll
database@database-Veriton-Series:~/Desktop$ ./a.out
more.mdsb@gmail.com
 Wordcount= 4
 Numbercount= 0
 Charcount= 16
 Linecount= 1
 specialSymbolcount= 3
Note: press Ctrl+D (end of input) to display the output.

DAA PROGRAM BINARY SEARCH

Theory:

Binary search algorithm

Generally, to find a value in an unsorted array, we must look through the elements one by one until the searched value is found. If the value is absent from the array, we go through all the elements. On average, the complexity of such an algorithm is proportional to the length of the array.
The situation changes significantly when the array is sorted. In that case, random access can be exploited to find the searched value quickly: the cost of the search drops to the binary logarithm of the array length. For reference, log2(1 000 000) ≈ 20. This means that, in the worst case, the algorithm makes about 20 steps to find a value in a sorted array of a million elements, or to conclude that it is not in the array.

Algorithm

The algorithm is quite simple. It can be done either recursively or iteratively:
  1. get the middle element;
  2. if the middle element equals the searched value, the algorithm stops;
  3. otherwise, two cases are possible:
    • the searched value is less than the middle element: go to step 1 for the part of the array before the middle element;
    • the searched value is greater than the middle element: go to step 1 for the part of the array after the middle element.
Now we should define when the iterations stop. The first case is when the searched element is found. The second is when the subarray has no elements; in this case, we can conclude that the searched value is not present in the array.

Examples

Example 1. Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 > 6):     -1  5  6  18  19  25  46  78  102  114
Step 2 (middle element is 5 < 6):      -1  5  6  18  19  25  46  78  102  114
Step 3 (middle element is 6 == 6):     -1  5  6  18  19  25  46  78  102  114
Example 2. Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Step 1 (middle element is 19 < 103):   -1  5  6  18  19  25  46  78  102  114
Step 2 (middle element is 78 < 103):   -1  5  6  18  19  25  46  78  102  114
Step 3 (middle element is 102 < 103):  -1  5  6  18  19  25  46  78  102  114
Step 4 (middle element is 114 > 103):  -1  5  6  18  19  25  46  78  102  114
Step 5 (searched value is absent):     -1  5  6  18  19  25  46  78  102  114

Complexity analysis

A huge advantage of this algorithm is that its complexity depends on the array size logarithmically in the worst case. In practice this means that the algorithm will do at most log2(n) iterations, which is a very small number even for big arrays. This is easy to prove: on every step the size of the searched part is reduced by half, and the algorithm stops when there are no elements left to search. So after the given number of iterations, n / 2^iterations elements remain, and the search continues only while
n / 2^iterations ≥ 1,
which gives
iterations ≤ log2(n).
It means that the time complexity of binary search is O(log2(n)).
Input: 
#include<stdio.h>
int binarySearch(int a[10], int key, int lower, int upper);

int main()
{
    int a[10], i, key, upper = 9, lower = 0, pos;
    /* Note: binary search requires the elements to be in sorted order. */
    printf("Enter the 10 Elements:");
    for(i = 0; i < 10; i++)
    {
        scanf("%d", &a[i]);
    }
    printf("Enter the key Element:");
    scanf("%d", &key);
    pos = binarySearch(a, key, lower, upper);
    if(pos == -1)
        printf("\n\t Key Element is not found:");
    else
        printf("key element found at %d location", pos + 1);
    return 0;
}

int binarySearch(int a[10], int key, int lower, int upper)
{
    if(lower <= upper)
    {
        int mid = (lower + upper) / 2;
        if(a[mid] == key)
            return mid;
        else if(a[mid] > key)
            return binarySearch(a, key, lower, mid - 1);
        else
            return binarySearch(a, key, mid + 1, upper);
    }
    return -1;
}
   
Output:
pgcomp@pgcomp-OptiPlex-745:~$ gcc binary.c
pgcomp@pgcomp-OptiPlex-745:~$ ./a.out
Enter the 10 Elements:2
4
5
6
9
8
5
1
3
7
Enter the key Element:5
key element found at 3

DAA(Using Divide & Conquer Strategies Design a Class for Concurrent Quick Sort Using C++)

Theory:

 
Like merge sort, quicksort uses divide-and-conquer, and so it's a recursive algorithm. The way that quicksort uses divide-and-conquer is a little different from how merge sort does. In merge sort, the divide step does hardly anything, and all the real work happens in the combine step. Quicksort is the opposite: all the real work happens in the divide step. In fact, the combine step in quicksort does absolutely nothing.
Quicksort has a couple of other differences from merge sort. Quicksort works in place. And its worst-case running time is as bad as selection sort's and insertion sort's: Θ(n²). But its average-case running time is as good as merge sort's: Θ(n lg n). So why think about quicksort when merge sort is at least as good? That's because the constant factor hidden in the big-Θ notation for quicksort is quite good. In practice, quicksort outperforms merge sort, and it significantly outperforms selection sort and insertion sort.
Here is how quicksort uses divide-and-conquer. As with merge sort, think of sorting a subarray array[p..r], where initially the subarray is array[0..n-1].

One step of divide-and-conquer
  1. Divide by choosing any element in the subarray array[p..r]. Call this element the pivot. Rearrange the elements in array[p..r] so that all elements that are less than or equal to the pivot are to its left and all elements that are greater than the pivot are to its right. We call this procedure partitioning. At this point, it doesn't matter what order the elements to the left of the pivot are in relative to each other, and the same holds for the elements to the right of the pivot. We just care that each element is somewhere on the correct side of the pivot.
    As a matter of practice, we'll always choose the rightmost element in the subarray, array[r], as the pivot. So, for example, if the subarray consists of [9, 7, 5, 11, 12, 2, 14, 3, 10, 6], then we choose 6 as the pivot. After partitioning, the subarray might look like [5, 2, 3, 6, 12, 7, 14, 9, 10, 11]. Let q be the index of where the pivot ends up.

  2. Conquer by recursively sorting the subarrays array[p..q-1] (all elements to the left of the pivot, which must be less than or equal to the pivot) and array[q+1..r] (all elements to the right of the pivot, which must be greater than the pivot).

  3. Combine by doing nothing. Once the conquer step recursively sorts, we are done. Why? All elements to the left of the pivot, in array[p..q-1], are less than or equal to the pivot and are sorted, and all elements to the right of the pivot, in array[q+1..r], are greater than the pivot and are sorted. The elements in array[p..r] can't help but be sorted!
    Think about our example. After recursively sorting the subarrays to the left and right of the pivot, the subarray to the left of the pivot is [2, 3, 5], and the subarray to the right of the pivot is [7, 9, 10, 11, 12, 14]. So the subarray has [2, 3, 5], followed by 6, followed by [7, 9, 10, 11, 12, 14]. The subarray is sorted.
The base cases are subarrays of fewer than two elements, just as in merge sort. In merge sort, you never see a subarray with no elements, but you can in quicksort, if the other elements in the subarray are all less than the pivot or all greater than the pivot.
Let's go back to the conquer step and walk through the recursive sorting of the subarrays. After the first partition, we have subarrays of [5, 2, 3] and [12, 7, 14, 9, 10, 11], with 6 as the pivot.
To sort the subarray [5, 2, 3], we choose 3 as the pivot. After partitioning, we have [2, 3, 5]. The subarray [2], to the left of the pivot, is a base case when we recurse, as is the subarray [5], to the right of the pivot.
To sort the subarray [12, 7, 14, 9, 10, 11], we choose 11 as the pivot, resulting in [7, 9, 10] to the left of the pivot and [14, 12] to the right. After these subarrays are sorted, we have [7, 9, 10], followed by 11, followed by [12, 14].
This is how the entire quicksort algorithm unfolds: array locations that have served as pivots in previous recursive calls are never examined or moved again.

Quicksort complexity: worst case Θ(n²), average case Θ(n lg n).

 Input:

#define _XOPEN_SOURCE 600
#include<iostream>
#include <stdlib.h>
#include <pthread.h>
using namespace std;
// Macro for swapping two values.
#define SWAP(x,y) do {\
    __typeof__(x) tmp = x;\
    x = y;\
    y = tmp;\
} while(0)

/**
 * Partition the array.  Takes the index of the pivot point as the pivot
 * argument.  Puts all of the values lower than this point into the first part
 * of the array and returns the new location of the pivot point.
 */
int partition(int *array, int left, int right, int pivot)
{
    int pivotValue = array[pivot];
    SWAP(array[pivot], array[right]);
    int storeIndex = left;
    for (int i=left ; i<right ; i++)
    {
        if (array[i] <= pivotValue)
        {
            SWAP(array[i], array[storeIndex]);
            storeIndex++;
        }
    }
    SWAP(array[storeIndex], array[right]);
    return storeIndex;
}
/**
 * Structure containing the arguments to the concurrent_quicksort function.  Used
 * when starting it in a new thread, because pthread_create() can only pass one
 * (pointer) argument.
 */
struct qsort_starter
{
    int *array;
    int left;
    int right;
};

void concurrent_quicksort(int *array, int left, int right);

/**
 * Thread trampoline that extracts the arguments from a qsort_starter structure
 * and calls concurrent_quicksort.
 */
void* quicksort_thread(void *init)
{
    struct qsort_starter *start = (struct qsort_starter *)init;
    concurrent_quicksort(start->array, start->left, start->right);
    return NULL;
}
/**
 * Parallel version of the quicksort function. 
 */
void concurrent_quicksort(int *array, int left, int right)
{
    if (right > left)
    {
        int pivotIndex = left + (right - left)/2;
        pivotIndex = partition(array, left, right, pivotIndex);
        // Create the thread for the first recursive call
        struct qsort_starter arg = {array, left, pivotIndex-1};
        pthread_t thread;
        int ret = pthread_create(&thread, NULL, quicksort_thread, &arg);
        // Perform the second recursive call in this thread
        concurrent_quicksort(array, pivotIndex+1, right);
        // Wait for the first call to finish.
        pthread_join(thread, NULL);
    }
}

int main(int argc, char **argv)
{
    int values[10]={5,6,8,9,1,2,3,0,4,7};
    concurrent_quicksort(values, 0, 9);
     for (int i=0 ; i<10 ; i++)
    {
        cout<<"\t"<<values[i];
    }
}



Output:


pgcomp@pgcomp-OptiPlex-745:~$ g++ CQS.cpp -lpthread -fpermissive
CQS.cpp: In function ‘void* quicksort_thread(void*)’:
CQS.cpp:57:35: warning: invalid conversion from ‘void*’ to ‘qsort_starter*’ [-fpermissive]
     struct qsort_starter *start = init;
                                   ^
pgcomp@pgcomp-OptiPlex-745:~$ ./a.out
    0    1    2    3    4    5    6    7    8    9