Added Lab-3
rishitsaiya authored Apr 19, 2021
1 parent 8d70c67 commit ab6d211
Showing 67 changed files with 20,995 additions and 0 deletions.
Binary file added Lab-3/180010027.pdf
Binary file added Lab-3/180010027_lab3.zip
Binary file added Lab-3/Laboratory 3.pdf
Binary file added Lab-3/byte-unixbench-mod.zip
339 changes: 339 additions & 0 deletions Lab-3/byte-unixbench-mod/LICENSE.txt


92 changes: 92 additions & 0 deletions Lab-3/byte-unixbench-mod/README.md
@@ -0,0 +1,92 @@
## mod at IIT Dharwad
This is a modified version of the Unixbench suite. The original benchmarks were designed as "fixed duration" benchmarks: they each repeat a certain pattern over and over until a timer expires. The modification was done to make some of the benchmarks (namely, arith, fstime, pipe, spawn, syscall) as "fixed work": they do a certain fixed amount of work, regardless of the time it takes to do that work.

# byte-unixbench

**UnixBench** is the original BYTE UNIX benchmark suite, updated and revised by many people over the years.

The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple
tests are used to test various aspects of the system's performance. These test results are then compared to the
scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores.
The entire set of index values is then combined to make an overall index for the system.

Some very simple graphics tests are included to measure the 2D and 3D graphics performance of the system.

Multi-CPU systems are handled. If your system has multiple CPUs, the default behaviour is to run the selected tests
twice -- once with one copy of each test program running at a time, and once with N copies, where N is the number of
CPUs. This is designed to allow you to assess:

* the performance of your system when running a single task
* the performance of your system when running multiple tasks
* the gain from your system's implementation of parallel processing

Do be aware that this is a system benchmark, not a CPU, RAM or disk benchmark. The results will depend not only on
your hardware, but on your operating system, libraries, and even compiler.

## History

**UnixBench** was started in 1983 at Monash University as a simple synthetic benchmarking application. It
was then taken up and expanded by **Byte Magazine**. Linux modifications were contributed by Jon Tombs;
the original authors were Ben Smith, Rick Grehan, and Tom Yager. The tests compare Unix systems by
measuring their results against a set of scores obtained by running the code on a baseline system,
a SPARCstation 20-61 (rated at 10.0).

David C. Niemi maintained the program for quite some time, and made some major modifications and updates,
and produced **UnixBench 4**. He later gave the program to Ian Smith to maintain. Ian subsequently made
some major changes and revised it from version 4 to version 5.

Thanks to Ian Smith for managing the release up to 5.1.3. As of the next release (5.2), [Anthony F. Voellm](https://github.com/voellm) is going to help maintain the code base. Releases will be cut once enough pull requests have accumulated to warrant one.

The general process will be the following:

* Open a bug announcing that a new release will happen.
* Everything on the `dev` branch will be run.
* Code will move from the `dev` branch into `main` and be tagged. Bug-fix releases will increment the minor version, and major functionality changes will increase the major version.

## Included Tests

UnixBench consists of a number of individual tests that are targeted at specific areas. Here is a summary of what
each test does:

### Dhrystone

Developed by Reinhold Weicker in 1984, this benchmark is used to measure and compare the performance of computers. The test focuses on string handling; there are no floating-point operations. It is heavily influenced by hardware and software design, compiler and linker options, code optimization, cache memory, wait states, and integer data types.

### Whetstone

This test measures the speed and efficiency of floating-point operations. It contains several modules that represent a mix of operations typically performed in scientific applications. A wide variety of C functions, including `sin`, `cos`, `sqrt`, `exp`, and `log`, are used, as well as integer and floating-point arithmetic, array accesses, conditional branches, and procedure calls. The test exercises both integer and floating-point arithmetic.

### `execl` Throughput

This test measures the number of `execl` calls that can be performed per second. `execl` is part of the exec family of functions that replaces the current process image with a new process image. It and many other similar commands are front ends for the function `execve()`.

### File Copy

This measures the rate at which data can be transferred from one file to another, using various buffer sizes. The file read, write and copy tests capture the number of characters that can be written, read and copied in a specified time (default is 10 seconds).

### Pipe Throughput

A pipe is the simplest form of communication between processes. Pipe throughput is the number of times (per second) a process can write 512 bytes to a pipe and read them back. The pipe throughput test has no real counterpart in real-world programming.

### Pipe-based Context Switching

This test measures the number of times two processes can exchange an increasing integer through a pipe. The pipe-based context switching test is more like a real-world application. The test program spawns a child process with which it carries on a bi-directional pipe conversation.

### Process Creation

This test measures the number of times a process can fork and reap a child that immediately exits. Process creation involves building a process control block and allocating memory for the new process, so this test is sensitive to memory bandwidth. Typically, this benchmark is used to compare implementations of operating-system process-creation calls.

### Shell Scripts

The shell scripts test measures the number of times per minute a process can start and reap a set of one, two, four, and eight concurrent copies of a shell script, where the script applies a series of transformations to a data file.

### System Call Overhead

This estimates the cost of entering and leaving the operating system kernel, i.e., the overhead of performing a system call. It consists of a simple program that repeatedly calls the `getpid` system call (which returns the process ID of the calling process). The time taken by these calls is used to estimate the cost of entering and exiting the kernel.

### Graphical Tests

Both 2D and 3D graphical tests are provided; at the moment, the 3D suite in particular is very limited, consisting of the `ubgears` program. These tests are intended to provide a very rough idea of the system's 2D and 3D graphics performance. Bear in mind, of course, that the reported performance will depend not only on hardware, but on whether your system has appropriate drivers for it.

# License

This project is released under the [GPL v2](LICENSE.txt) license.
306 changes: 306 additions & 0 deletions Lab-3/byte-unixbench-mod/UnixBench/Makefile
@@ -0,0 +1,306 @@
##############################################################################
# UnixBench v5.1.3
# Based on The BYTE UNIX Benchmarks - Release 3
# Module: Makefile SID: 3.9 5/15/91 19:30:15
#
##############################################################################
# Bug reports, patches, comments, suggestions should be sent to:
# David C Niemi <niemi@tux.org>
#
# Original Contacts at Byte Magazine:
# Ben Smith or Tom Yager at BYTE Magazine
# bensmith@bytepb.byte.com tyager@bytepb.byte.com
#
##############################################################################
# Modification Log: 7/28/89 cleaned out workload files
# 4/17/90 added routines for installing from shar mess
# 7/23/90 added compile for dhrystone version 2.1
# (this is not part of Run file. still use old)
# removed HZ from everything but dhry.
# HZ is read from the environment, if not
# there, you must define it in this file
# 10/30/90 moved new dhrystone into standard set
# new pgms (dhry included) run for a specified
# time rather than specified number of loops
# 4/5/91 cleaned out files not needed for
# release 3 -- added release 3 files -ben
# 10/22/97 added compiler options for strict ANSI C
# checking for gcc and DEC's cc on
# Digital Unix 4.x (kahn@zk3.dec.com)
# 09/26/07 changes for UnixBench 5.0
# 09/30/07 adding ubgears, GRAPHIC_TESTS switch
# 10/14/07 adding large.txt
# 01/13/11 added support for parallel compilation
# 01/07/16 [refer to version control commit messages and
# cease using two-digit years in date formats]
##############################################################################

##############################################################################
# CONFIGURATION
##############################################################################

SHELL = /bin/sh

# GRAPHIC TESTS: Uncomment the definition of "GRAPHIC_TESTS" to enable
# the building of the graphics benchmarks. This will require the
# X11 libraries on your system. (e.g. libX11-devel mesa-libGL-devel)
#
# Comment the line out to disable these tests.
# GRAPHIC_TESTS = defined

# Set "GL_LIBS" to the libraries needed to link a GL program.
GL_LIBS = -lGL -lXext -lX11


# COMPILER CONFIGURATION: Set "CC" to the name of the compiler to use
# to build the binary benchmarks. You should also set "$cCompiler" in the
# Run script to the name of the compiler you want to test.
#CC=gcc
CC=clang

# OPTIMISATION SETTINGS:
# Use the compiler options in UB_GCC_OPTIONS if it is defined, either as an
# environment variable or on the make command line.
ifdef UB_GCC_OPTIONS
OPTON = $(UB_GCC_OPTIONS)

else
## Very generic
#OPTON = -O

## For Linux 486/Pentium, GCC 2.7.x and 2.8.x
#OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math \
# -m486 -malign-loops=2 -malign-jumps=2 -malign-functions=2

## For Linux, GCC previous to 2.7.0
#OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math -m486

#OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math \
# -m386 -malign-loops=1 -malign-jumps=1 -malign-functions=1

## For Solaris 2, or general-purpose GCC 2.7.x
#OPTON = -O2 -fomit-frame-pointer -fforce-addr -ffast-math -Wall

## For Digital Unix v4.x, with DEC cc v5.x
#OPTON = -O4
#CFLAGS = -DTIME -std1 -verbose -w0

## gcc optimization flags
## (-ffast-math) disables strict IEEE or ISO rules/specifications for math funcs
#OPTON = -O3 -ffast-math
OPTON = -O0 -ffast-math

## OS detection. Comment out if gmake syntax not supported by other 'make'.
OSNAME:=$(shell uname -s)
ARCH := $(shell uname -p)
ifeq ($(OSNAME),Linux)
# Not all CPU architectures support "-march" or "-march=native".
# - Supported : x86, x86_64, ARM, AARCH64, etc..
# - Not Supported: RISC-V, IBM Power, etc...
ifneq ($(ARCH),$(filter $(ARCH),ppc64 ppc64le))
OPTON += -march=native -mtune=native
else
OPTON += -mcpu=native -mtune=native
endif
endif

ifeq ($(OSNAME),Darwin)
# (adjust flags or comment out this section for older versions of XCode or OS X)
# (-mmacosx-version-min= requires at least that version of the SDK to be installed)
ifneq ($(ARCH),$(filter $(ARCH),ppc64 ppc64le))
OPTON += -march=native -mmacosx-version-min=10.10
else
OPTON += -mcpu=native
endif
#http://stackoverflow.com/questions/9840207/how-to-use-avx-pclmulqdq-on-mac-os-x-lion/19342603#19342603
CFLAGS += -Wa,-q
endif

endif


## generic gcc CFLAGS. -DTIME must be included.
CFLAGS += -Wall -pedantic $(OPTON) -I $(SRCDIR) -DTIME


##############################################################################
# END CONFIGURATION
##############################################################################


# local directories
PROGDIR = ./pgms
SRCDIR = ./src
TESTDIR = ./testdir
RESULTDIR = ./results
TMPDIR = ./tmp
# other directories
INCLDIR = /usr/include
LIBDIR = /lib
SCRIPTS = unixbench.logo multi.sh tst.sh index.base
SOURCES = arith.c big.c context1.c \
dummy.c execl.c \
fstime.c hanoi.c \
pipe.c spawn.c \
syscall.c looper.c timeit.c time-polling.c \
dhry_1.c dhry_2.c dhry.h whets.c ubgears.c
TESTS = sort.src cctest.c dc.dat large.txt

ifneq (,$(GRAPHIC_TESTS))
GRAPHIC_BINS = $(PROGDIR)/ubgears
else
GRAPHIC_BINS =
endif

# Program binaries.
BINS = $(PROGDIR)/arithoh $(PROGDIR)/register $(PROGDIR)/short \
$(PROGDIR)/int $(PROGDIR)/long $(PROGDIR)/float $(PROGDIR)/double \
$(PROGDIR)/hanoi $(PROGDIR)/syscall $(PROGDIR)/context1 \
$(PROGDIR)/pipe $(PROGDIR)/spawn $(PROGDIR)/execl \
$(PROGDIR)/dhry2 $(PROGDIR)/dhry2reg $(PROGDIR)/looper \
$(PROGDIR)/fstime $(PROGDIR)/whetstone-double $(GRAPHIC_BINS)
## These compile only on some platforms...
# $(PROGDIR)/poll $(PROGDIR)/poll2 $(PROGDIR)/select

# Required non-binary files.
REQD = $(BINS) $(PROGDIR)/unixbench.logo \
$(PROGDIR)/multi.sh $(PROGDIR)/tst.sh $(PROGDIR)/index.base \
$(PROGDIR)/gfx-x11 \
$(TESTDIR)/sort.src $(TESTDIR)/cctest.c $(TESTDIR)/dc.dat \
$(TESTDIR)/large.txt

# ######################### the big ALL ############################
all:
## Ick!!! What is this about??? How about let's not chmod everything bogusly.
# @chmod 744 * $(SRCDIR)/* $(PROGDIR)/* $(TESTDIR)/* $(DOCDIR)/*
$(MAKE) distr
$(MAKE) programs

# ####################### a check for Run ######################
check: $(REQD)
$(MAKE) all
# ##############################################################
# distribute the files out to subdirectories if they are in this one
distr:
@echo "Checking distribution of files"
# scripts
@if test ! -d $(PROGDIR) \
; then \
mkdir $(PROGDIR) \
; mv $(SCRIPTS) $(PROGDIR) \
; else \
echo "$(PROGDIR) exists" \
; fi
# C sources
@if test ! -d $(SRCDIR) \
; then \
mkdir $(SRCDIR) \
; mv $(SOURCES) $(SRCDIR) \
; else \
echo "$(SRCDIR) exists" \
; fi
# test data
@if test ! -d $(TESTDIR) \
; then \
mkdir $(TESTDIR) \
; mv $(TESTS) $(TESTDIR) \
; else \
echo "$(TESTDIR) exists" \
; fi
# temporary work directory
@if test ! -d $(TMPDIR) \
; then \
mkdir $(TMPDIR) \
; else \
echo "$(TMPDIR) exists" \
; fi
# directory for results
@if test ! -d $(RESULTDIR) \
; then \
mkdir $(RESULTDIR) \
; else \
echo "$(RESULTDIR) exists" \
; fi

.PHONY: all check distr programs run clean spotless

programs: $(BINS)

# (use $< to link only the first dependency, instead of $^,
# since the programs matching this pattern have only
# one input file, and others are #include "xxx.c"
# within the first. (not condoning, just documenting))
# (dependencies could be generated by modern compilers,
# but let's not assume modern compilers are present)
$(PROGDIR)/%:
$(CC) -o $@ $(CFLAGS) $< $(LDFLAGS)

# Individual programs
# Sometimes the same source file is compiled in different ways.
# This limits the 'make' patterns that can usefully be applied.

$(PROGDIR)/arithoh: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/arithoh: CFLAGS += -Darithoh
$(PROGDIR)/register: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/register: CFLAGS += -Ddatum='register int'
$(PROGDIR)/short: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/short: CFLAGS += -Ddatum=short
$(PROGDIR)/int: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/int: CFLAGS += -Ddatum=int
$(PROGDIR)/long: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/long: CFLAGS += -Ddatum=long
$(PROGDIR)/float: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/float: CFLAGS += -Ddatum=float
$(PROGDIR)/double: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c
$(PROGDIR)/double: CFLAGS += -Ddatum=double

$(PROGDIR)/poll: $(SRCDIR)/time-polling.c
$(PROGDIR)/poll: CFLAGS += -DUNIXBENCH -DHAS_POLL
$(PROGDIR)/poll2: $(SRCDIR)/time-polling.c
$(PROGDIR)/poll2: CFLAGS += -DUNIXBENCH -DHAS_POLL2
$(PROGDIR)/select: $(SRCDIR)/time-polling.c
$(PROGDIR)/select: CFLAGS += -DUNIXBENCH -DHAS_SELECT

$(PROGDIR)/whetstone-double: $(SRCDIR)/whets.c
$(PROGDIR)/whetstone-double: CFLAGS += -DDP -DGTODay -DUNIXBENCH
$(PROGDIR)/whetstone-double: LDFLAGS += -lm

$(PROGDIR)/pipe: $(SRCDIR)/pipe.c $(SRCDIR)/timeit.c

$(PROGDIR)/execl: $(SRCDIR)/execl.c $(SRCDIR)/big.c

$(PROGDIR)/spawn: $(SRCDIR)/spawn.c $(SRCDIR)/timeit.c

$(PROGDIR)/hanoi: $(SRCDIR)/hanoi.c $(SRCDIR)/timeit.c

$(PROGDIR)/fstime: $(SRCDIR)/fstime.c

$(PROGDIR)/syscall: $(SRCDIR)/syscall.c $(SRCDIR)/timeit.c

$(PROGDIR)/context1: $(SRCDIR)/context1.c $(SRCDIR)/timeit.c

$(PROGDIR)/looper: $(SRCDIR)/looper.c $(SRCDIR)/timeit.c

$(PROGDIR)/ubgears: $(SRCDIR)/ubgears.c
$(PROGDIR)/ubgears: LDFLAGS += -lm $(GL_LIBS)

$(PROGDIR)/dhry2: CFLAGS += -DHZ=${HZ}
$(PROGDIR)/dhry2: $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c \
$(SRCDIR)/dhry.h $(SRCDIR)/timeit.c
$(CC) -o $@ ${CFLAGS} $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c

$(PROGDIR)/dhry2reg: CFLAGS += -DHZ=${HZ} -DREG=register
$(PROGDIR)/dhry2reg: $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c \
$(SRCDIR)/dhry.h $(SRCDIR)/timeit.c
$(CC) -o $@ ${CFLAGS} $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c

# Run the benchmarks and create the reports
run:
sh ./Run

clean:
$(RM) $(BINS) core *~ */*~

spotless: clean
$(RM) $(RESULTDIR)/* $(TMPDIR)/*

## END ##