  Before describing the algorithm that encodes the data, it is essential to note the variables and notations used throughout the algorithm: X is the image to be encoded, Y is the correlated image available at the decoder, N is the number of pixels in each group, and D is the maximum gray level difference imposed on the same location pixels of X and Y.
  -Encoding of Data:
It should be noted that this algorithm modifies Pradhan's lossless coding of two highly correlated sources [2], [3], whose correlation is determined by their Hamming distance, into lossless coding of two highly correlated image sources, whose correlation is determined by the actual gray level differences of their same location pixels. The more important difference, however, which is an extension of Pradhan's lossless coding scheme, is that in this work each coset contains information about multiple pixels through a new coset placement approach. Hence, the image X is encoded in four stages:
(i) Grouping pixels into cosets: The first step is to place the first pixel of each group of N pixels into a coset. Each group (not the coset) consists of N successive pixels, and the groups are non-overlapping. The coset placement is done by taking each pixel value modulo D, where D is the maximum gray level difference imposed on the same location pixels of the two images, X and Y. It should be observed that coset placement by the modulo operation automatically places the pixel values that are farthest apart into the same coset, which is the goal when coding highly correlated images. It should also be noted that this differs from Pradhan's approach of placing data values into cosets by their Hamming distance: Pradhan works with data chunks whose high correlation is defined by the proximity of their Hamming distances, whereas for still images the actual gray level proximity of the same location pixels of the two images must be considered. Note that this step exploits the inter-pixel redundancy.
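The coset placement of step (i) can be sketched as follows; `pixel_coset` and `group_leading_cosets` are hypothetical helper names introduced for illustration, not names from the paper.

```python
# Sketch of step (i): a pixel's coset index is its gray level modulo D,
# so values that are a multiple of D apart (the farthest-apart values)
# automatically land in the same coset.

def pixel_coset(pixel_value, D):
    """Coset index of a single pixel, taken modulo D."""
    return pixel_value % D

def group_leading_cosets(pixels, N, D):
    """Coset index of the first pixel of each non-overlapping N-pixel group."""
    return [pixel_coset(pixels[i], D) for i in range(0, len(pixels), N)]
```

For example, with D = 32 the gray levels 30, 62, 94, ... all map to coset 30, which is exactly the maximally separated set the step aims for.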
(ii) Grouping the pixel value differences into sub-cosets: The coset placement of the individual pixels is followed by the sub-coset placement of the pixel differences. Within each N-pixel group, the gray level differences between successive pixels are found first. Each difference value is then incremented by the total range of pixel gray level values, in order not to lose information about the negative differences, and the resulting values are grouped into sub-cosets by taking them modulo D. Hence, for each N-pixel group, one pixel coset value (found in part (i)) and N-1 difference sub-coset values (found in this part) are transmitted. The purpose of transmitting difference sub-coset values instead of individual pixel coset values is to minimize the number of changes in the higher bit planes of the bit-plane encoding, and hence to achieve a lower bit rate in part (iv). This step exploits the intra-pixel redundancy of the image.
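A minimal sketch of this step, assuming 8-bit images (so the offset is the full range of 256) and the hypothetical helper name `difference_subcosets`:

```python
# Sketch of step (ii): within one N-pixel group, each successive difference
# is offset by the full gray-level range so negative differences remain
# representable, then reduced modulo D to get its sub-coset index.

RANGE = 256  # total range of gray levels (assumption: 8-bit images)

def difference_subcosets(group, D):
    """Sub-coset indices for the N-1 successive differences in one group."""
    return [(group[i] - group[i + 1] + RANGE) % D for i in range(len(group) - 1)]
```

The offset-then-modulo form matches the worked example later in the section, e.g. (62 - 29 + 256) mod 32 = 1.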
(iii) Coding the coset values with the Gray code: The next step is to code the coset values (not the individual pixel values) with the Gray code. The purpose of this step is to use a code that exploits the benefits of the bit-plane encoding done in part (iv).
(iv) Bit-plane encoding: In a transmission with gray level bound D, there are log2(D) bit-planes, and the Gray-coded cosets (1 for the starting pixel and N-1 for the differences in each N-pixel group, for a total of (512*256)/N groups) are coded into the log2(D) bit-planes.
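Steps (iii) and (iv) can be sketched together as below; the MSB-first list-of-lists bit-plane layout is an assumption for illustration, and `to_gray` / `bit_planes` are hypothetical names.

```python
# Sketch of steps (iii)-(iv): convert each coset index to its Gray code,
# then slice the log2(D) bits into bit-planes. Gray coding makes adjacent
# coset values differ in a single bit, which favors bit-plane coding.

def to_gray(n):
    """Binary-reflected Gray code of a nonnegative integer."""
    return n ^ (n >> 1)

def bit_planes(coset_values, D):
    """Split Gray-coded coset values into log2(D) bit-planes (MSB first)."""
    num_planes = D.bit_length() - 1  # log2(D) for power-of-two D
    gray = [to_gray(v) for v in coset_values]
    return [[(g >> p) & 1 for g in gray] for p in range(num_planes - 1, -1, -1)]
```

For D = 32 this yields 5 bit-planes, each holding one bit of every transmitted coset value.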
  -Decoding of Data:
For each N-pixel group, one code group is received at the decoder. It includes one pixel coset value (that of the initial pixel of the N-pixel group) and N-1 sub-coset values for the differences. First, using the transmitted coset value of the initial pixel and the sub-coset values, all possible values of the N-pixel data sequence are found; this step reverses the modulo operation. Then, for each of the N pixels, the candidate value closest to the value of Y at that pixel location is selected as the value of X.
As an example, assume that for N=2 and D=32, the decoder receives the following sequence (after conversion from Gray code to binary code):
1111000001
This code, containing the information of 2 consecutive pixels, conveys that the coset value of the first pixel is 11110 (i.e., 30), and that
((Pixel 1 - Pixel 2 + 256) mod 32) = 1.
Now assume that the first pixel is decoded as 62 by the decoder, being the candidate closest to Y (indeed, 62 mod 32 = 30).
Then, the second pixel is one of 29, 61, 93, ..., 221, 253.
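The numbers in this example can be checked directly; the 5-bit split follows from log2(32) = 5, and the snippet below is only a verification sketch.

```python
# The received 10-bit word (already converted from Gray to binary) splits
# into two 5-bit cosets for N=2, D=32.
word = "1111000001"
first, second = int(word[:5], 2), int(word[5:], 2)

assert first == 30 and second == 1
assert 62 % 32 == first                 # decoding pixel 1 as 62 is consistent
assert (62 - 29 + 256) % 32 == second   # and so is pixel 2 == 29
# Candidates for pixel 2 given pixel 1 == 62: 29, 61, 93, ..., 253
assert list(range(29, 256, 32))[:3] == [29, 61, 93]
assert list(range(29, 256, 32))[-1] == 253
```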
This procedure is extended to larger N and a variety of D values to obtain the results for evaluating the algorithm on the original and interpolated image pairs in the data set.