Scalar quantization is a process that maps all inputs within a specified range to a common value. This process maps inputs in a different range of values to a different common value. In effect, scalar quantization digitizes an analog signal. Two parameters determine a quantization: a partition and a codebook.
A quantization partition defines several contiguous, nonoverlapping ranges of values within the set of real numbers. To specify a partition in the MATLAB® environment, list the distinct endpoints of the different ranges in a vector.
For example, if the partition separates the real number line into the four sets

    {x: x <= 0}
    {x: 0 < x <= 1}
    {x: 1 < x <= 3}
    {x: 3 < x}

then you can represent the partition as the three-element vector

    partition = [0,1,3];
The length of the partition vector is one less than the number of partition intervals.
A codebook tells the quantizer which common value to assign to inputs that fall into each range of the partition. Represent a codebook as a vector whose length is the same as the number of partition intervals. For example, the vector

    codebook = [-1, 0.5, 2, 3];

is one possible codebook for the partition [0,1,3].
The quantiz function also returns a vector that tells which interval each input is in. For example, the output below says that the input entries lie within the intervals labeled 0, 6, and 5, respectively. Here, the 0th interval consists of real numbers less than or equal to 3; the 6th interval consists of real numbers greater than 8 but less than or equal to 9; and the 5th interval consists of real numbers greater than 7 but less than or equal to 8.
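A minimal sketch consistent with those intervals; the partition points (the integers 3 through 9) and the three inputs (2, 9, and 8) are taken from the description above:

    partition = [3,4,5,6,7,8,9];    % Seven points define eight intervals.
    index = quantiz([2 9 8],partition)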
The output is

    index =

         0
         6
         5
If you continue this example by defining a codebook vector such as

    codebook = [3,3,4,5,6,7,8,9];

then the equation below relates the vector index to the quantized signal quants.

    quants = codebook(index+1);
This formula for quants is exactly what the quantiz function uses if you instead phrase the example more concisely as below.
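A sketch of the more concise phrasing, reusing the partition and the example codebook above:

    partition = [3,4,5,6,7,8,9];
    codebook = [3,3,4,5,6,7,8,9];
    % quantiz applies the codebook lookup internally.
    [index,quants] = quantiz([2 9 8],partition,codebook);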
Quantization distorts a signal. You can reduce distortion by choosing appropriate partition and codebook parameters. However, testing and selecting parameters for large signal sets with a fine quantization scheme can be tedious. One way to produce partition and codebook parameters easily is to optimize them according to a set of so-called training data.
The training data you use should be typical of the kinds of signals you will actually be quantizing.
The lloyds function optimizes the partition and codebook according to the Lloyd algorithm. The code below optimizes the partition and codebook for one period of a sinusoidal signal, starting from a rough initial guess. Then it uses these parameters to quantize the original signal using the initial guess parameters as well as the optimized parameters. The output shows that the mean square distortion after quantizing is much less for the optimized parameters. The quantiz function automatically computes the mean square distortion and returns it as the third output argument.
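A sketch of this optimization; the initial partition and codebook here are rough, assumed guesses for a sinusoid that ranges over [-1, 1]:

    % One period of a sinusoid, used as training data.
    t = [0:.1:2*pi];
    sig = sin(t);
    % Rough initial guesses for the partition and codebook.
    partition = [-1:.2:1];
    codebook = [-1.2:.2:1];
    % Optimize both parameters according to the Lloyd algorithm.
    [partition2,codebook2] = lloyds(sig,codebook);
    % Quantize with both parameter sets; the third output
    % is the mean square distortion.
    [index,quants,distor] = quantiz(sig,partition,codebook);
    [index2,quants2,distor2] = quantiz(sig,partition2,codebook2);
    % Compare the two distortions.
    [distor, distor2]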
The quantization in the section Quantize a Signal requires no a priori knowledge about the transmitted signal. In practice, you can often make educated guesses about the present signal based on past signal transmissions. Using such educated guesses to help quantize a signal is known as predictive quantization. The most common predictive quantization method is differential pulse code modulation (DPCM).
The functions dpcmenco, dpcmdeco, and dpcmopt can help you implement a DPCM predictive quantizer with a linear predictor.
To determine an encoder for such a quantizer, you must supply not only a partition and codebook as described in Represent Partitions and Represent Codebooks, but also a predictor. The predictor is a function that the DPCM encoder uses to produce the educated guess at each step. A linear predictor has the form
    y(k) = p(1)x(k-1) + p(2)x(k-2) + ... + p(m-1)x(k-m+1) + p(m)x(k-m)
where x is the original signal, y(k) attempts to predict the value of x(k), and p is an m-tuple of real numbers. Instead of quantizing x itself, the DPCM encoder quantizes the predictive error, x - y. The integer m above is called the predictive order. The special case when m = 1 is called delta modulation.
If the guess for the kth value of the signal x, based on earlier values of x, is

    y(k) = p(1)x(k-1) + p(2)x(k-2) + ... + p(m)x(k-m)

then the corresponding predictor vector for toolbox functions is

    predictor = [0, p(1), p(2), ..., p(m)]
The initial zero in the predictor vector makes sense if you view the vector as the polynomial transfer function of a finite impulse response (FIR) filter.
A simple special case of DPCM quantizes the difference between the signal's current value and its value at the previous step. Thus the predictor is just y(k) = x(k-1). The code below implements this scheme. It encodes a sawtooth signal, decodes it, and plots both the original and decoded signals. The solid line is the original signal, while the dashed line is the recovered signal. The example also computes the mean square error between the original and decoded signals.
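A sketch of this scheme; the partition and codebook values are illustrative, and the sawtooth function assumes the Signal Processing Toolbox is available:

    predictor = [0 1];          % Predictor y(k) = x(k-1)
    partition = [-1:.1:.9];
    codebook = [-1:.1:1];
    t = [0:pi/50:2*pi];
    x = sawtooth(3*t);          % Original signal
    % Quantize x using DPCM.
    encodedx = dpcmenco(x,codebook,partition,predictor);
    % Try to recover x from the encoded signal.
    decodedx = dpcmdeco(encodedx,codebook,predictor);
    plot(t,x,t,decodedx,'--')   % Solid: original; dashed: decoded
    % Mean square error between original and decoded signals.
    distor = norm(x-decodedx)^2/length(x)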
The section Optimize Quantization Parameters describes how to use training data with the lloyds function to help find quantization parameters that minimize signal distortion. This section describes similar procedures for using the dpcmopt function in conjunction with the two functions dpcmenco and dpcmdeco, which first appear in the previous section.
The training data you use with dpcmopt should be typical of the kinds of signals you will actually be quantizing with dpcmenco.
This example is similar to the one in the last section. However, where the last example created predictor, partition, and codebook in a straightforward but haphazard way, this example uses the same codebook (now called initcodebook) as an initial guess for a new optimized codebook parameter. This example also uses the predictive order, 1, as the desired order of the new optimized predictor. The dpcmopt function creates these optimized parameters, using the sawtooth signal x as training data. The example goes on to quantize the training data itself; in theory, the optimized parameters are suitable for quantizing other data that is similar to x. Notice that the mean square distortion here is much less than the distortion in the previous example.
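A sketch of this optimization, reusing the sawtooth signal from the previous example (sawtooth again assumes the Signal Processing Toolbox):

    t = [0:pi/50:2*pi];
    x = sawtooth(3*t);              % Training data and signal to quantize
    initcodebook = [-1:.1:1];       % Initial guess at codebook
    % Optimize the parameters, using order 1 and the initial codebook.
    [predictor,codebook,partition] = dpcmopt(x,1,initcodebook);
    % Quantize x using DPCM with the optimized parameters.
    encodedx = dpcmenco(x,codebook,partition,predictor);
    % Try to recover x from the encoded signal.
    decodedx = dpcmdeco(encodedx,codebook,predictor);
    % Mean square error between original and decoded signals.
    distor = norm(x-decodedx)^2/length(x)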
In certain applications, such as speech processing, it is common to use a logarithm computation, called a compressor, before quantizing. The inverse operation of a compressor is called an expander. The combination of a compressor and expander is called a compander.
The compand function supports two kinds of companders: µ-law and A-law companders. Its reference page lists both compressor laws.
The code below quantizes an exponential signal in two ways and compares the resulting mean square distortions. First, it uses the quantiz function with a partition consisting of length-one intervals. In the second trial, compand implements a µ-law compressor, quantiz quantizes the compressed data, and compand expands the quantized data. The output shows that the distortion is smaller for the second scheme. This is because equal-length intervals are well suited to the logarithm of sig, but not well suited to sig. The figure shows how the compander changes sig.
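A sketch of this comparison; the partition 0:floor(V) and codebook 0:ceil(V) give the length-one intervals mentioned above, and µ = 255 is a conventional, assumed choice of compander parameter:

    Mu = 255;                   % Parameter for mu-law compander
    sig = -4:.1:4;
    sig = exp(sig);             % Exponential signal to quantize
    V = max(sig);

    % 1. Quantize using equal-length intervals and no compander.
    [index,quants,distor] = quantiz(sig,0:floor(V),0:ceil(V));

    % 2. Use the same partition and codebook, but compress
    %    before quantizing and expand afterwards.
    compsig = compand(sig,Mu,V,'mu/compressor');
    [index,quants] = quantiz(compsig,0:floor(V),0:ceil(V));
    newsig = compand(quants,Mu,max(quants),'mu/expander');
    distor2 = sum((newsig-sig).^2)/length(sig);

    [distor, distor2]           % Display both mean square distortions.
    plot(sig);                  % Plot original signal.
    hold on
    plot(compsig,'--');         % Plot compressed signal.
    legend('Original','Compressed')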
Huffman coding offers a way to compress data. The average length of a Huffman code depends on the statistical frequency with which the source produces each symbol from its alphabet. A Huffman code dictionary, which associates each data symbol with a codeword, has the property that no codeword in the dictionary is a prefix of any other codeword in the dictionary.
The huffmandict, huffmanenco, and huffmandeco functions support Huffman coding and decoding.
For long sequences from sources having skewed distributions and small alphabets, arithmetic coding compresses better than Huffman coding. To learn how to use arithmetic coding, see Arithmetic Coding.
Huffman coding requires statistical information about the source of the data being encoded. In particular, the p input argument in the huffmandict function lists the probability with which the source produces each symbol in its alphabet.
For example, consider a data source that produces 1s with probability 0.1, 2s with probability 0.1, and 3s with probability 0.8. The main computational step in encoding data from this source using a Huffman code is to create a dictionary that associates each data symbol with a codeword. The commands below create such a dictionary and then show the codeword vector associated with a particular value from the data source.
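A sketch of those commands, using the probabilities stated above:

    symbols = [1 2 3];              % Data symbols
    p = [0.1 0.1 0.8];              % Probability of each data symbol
    dict = huffmandict(symbols,p)   % Create the dictionary.
    dict{1,:}                       % Show the codeword for the symbol 1.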
The output below shows that the most probable data symbol, 3, is associated with a one-digit codeword, while less probable data symbols are associated with two-digit codewords. The output also shows, for example, that a Huffman encoder receiving the data symbol 1 should substitute the sequence 11.
The example below performs Huffman encoding and decoding, using a source whose alphabet has three symbols. Notice that the huffmanenco and huffmandeco functions use the dictionary that huffmandict created.
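A sketch of such an example; randsrc (Communications Toolbox) generates the random test signal here, and the signal length of 100 is an arbitrary choice:

    symbols = [1 2 3];                  % Alphabet
    p = [0.1 0.1 0.8];                  % Symbol probabilities
    dict = huffmandict(symbols,p);      % Create the dictionary.
    sig = randsrc(100,1,[symbols; p]);  % Random signal from the source
    comp = huffmanenco(sig,dict);       % Encode the signal.
    dsig = huffmandeco(comp,dict);      % Decode the Huffman code.
    isequal(sig,dsig)                   % Check that decoding recovered sig.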
Arithmetic coding offers a way to compress data and can be useful for data sources having a small alphabet. The length of an arithmetic code, instead of being fixed relative to the number of symbols being encoded, depends on the statistical frequency with which the source produces each symbol from its alphabet. For long sequences from sources having skewed distributions and small alphabets, arithmetic coding compresses better than Huffman coding.
The arithenco and arithdeco functions support arithmetic coding and decoding.
Arithmetic coding requires statistical information about the source of the data being encoded. In particular, the counts input argument in the arithenco and arithdeco functions lists the frequency with which the source produces each symbol in its alphabet. You can determine the frequencies by studying a set of test data from the source. The set of test data can have any size you choose, as long as each symbol in the alphabet has a nonzero frequency.
For example, before encoding data from a source that produces 10 x's, 10 y's, and 80 z's in a typical 100-symbol set of test data, define

    counts = [10 10 80];
Alternatively, if a larger set of test data from the source contains 22 x's, 23 y's, and 185 z's, then define

    counts = [22 23 185];
The example below performs arithmetic encoding and decoding, using a source whose alphabet has three symbols.
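A sketch of such an example, using the first counts vector above; the test sequence is an assumed, illustrative sample from the source:

    seq = [3 3 1 3 3 3 3 3 2 3];    % Sample sequence from the source
    counts = [10 10 80];            % Symbol frequencies from test data
    code = arithenco(seq,counts);   % Encode the sequence.
    dseq = arithdeco(code,counts,length(seq));  % Decode it.
    isequal(seq,dseq)               % Check that decoding recovered seq.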
The code below shows how the quantiz function uses partition and codebook to map a real vector, samp, to a new vector, quantized, whose entries are either -1, 0.5, 2, or 3.
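A sketch of the mapping, using the partition and codebook defined earlier; the vector samp is an assumed example input:

    partition = [0,1,3];
    codebook = [-1, 0.5, 2, 3];
    samp = [-2.4 -1 -0.2 0 0.2 1 1.2 1.9 2 2.9 3 3.5];  % Example input
    [index,quantized] = quantiz(samp,partition,codebook);
    quantized
    % For this samp, quantized = [-1 -1 -1 -1 0.5 0.5 2 2 2 2 2 3]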
This example illustrates the nature of scalar quantization more clearly. After quantizing a sampled sine wave, it plots the original and quantized signals. The plot contrasts the x's that make up the sine curve with the dots that make up the quantized signal. The vertical coordinate of each dot is a value in the vector codebook.
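A sketch of this example; the partition and codebook values are illustrative choices for a signal that ranges over [-1, 1]:

    t = [0:.1:2*pi];            % Times at which to sample the sine function
    sig = sin(t);               % Original signal, a sine wave
    partition = [-1:.2:1];      % Length 11, to represent 12 intervals
    codebook = [-1.2:.2:1];     % Length 12, one entry for each interval
    [index,quants] = quantiz(sig,partition,codebook);   % Quantize.
    plot(t,sig,'x',t,quants,'.')    % x's: original; dots: quantized
    legend('Original signal','Quantized signal');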