Search Results

Now showing 1 - 4 of 4
  • Publication
    A novel image compression method based on classified energy and pattern building blocks
    (Springer International Publishing AG, 2011) Güz, Ümit
    In this paper, a novel image compression method based on the generation of so-called classified energy and pattern blocks (CEPB) is introduced, and evaluation results are presented. The CEPB is constructed from a set of training images and then placed at both the transmitter and receiver sides of the communication system. The energy and pattern blocks of the input images to be reconstructed are then determined in the same way as in the construction of the CEPB. This process is coupled with a matching procedure that determines the index numbers of the classified energy and pattern blocks in the CEPB that best represent (match) the energy and pattern blocks of the input images. The encoding parameters are the block scaling coefficient and the index numbers of the energy and pattern blocks determined for each block of the input image. These parameters are sent from the transmitter to the receiver, and the classified energy and pattern blocks associated with the index numbers are retrieved from the CEPB. The input image is then reconstructed block by block at the receiver using the proposed mathematical model. Evaluation results show that the method achieves considerable compression ratios and good image quality even at low bit rates.
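The matching and reconstruction steps described in the abstract above can be sketched as follows. This is a minimal illustration only: the codebook contents, the function names, and the assumption that a block is reconstructed as a scaling coefficient times the elementwise product of an energy block and a pattern block are all simplifying assumptions, not the authors' actual model.

```python
import numpy as np

def encode_block(block, energy_bank, pattern_bank):
    """Exhaustively search the (assumed) codebook for the energy/pattern
    index pair and least-squares scaling coefficient that best
    approximate `block` as c * E * P (elementwise product)."""
    best = None
    for i, E in enumerate(energy_bank):
        for j, P in enumerate(pattern_bank):
            basis = (E * P).ravel()
            denom = basis @ basis
            if denom == 0:
                continue
            c = (block.ravel() @ basis) / denom  # least-squares scale
            err = np.sum((block.ravel() - c * basis) ** 2)
            if best is None or err < best[0]:
                best = (err, i, j, c)
    _, i, j, c = best
    return i, j, c  # the three quantities transmitted per block

def decode_block(i, j, c, energy_bank, pattern_bank):
    """Rebuild a block at the receiver from the transmitted indices
    and scaling coefficient, using the same (assumed) model."""
    return c * energy_bank[i] * pattern_bank[j]
```

Only the two indices and one coefficient cross the channel; both sides hold identical copies of the banks, which is what makes the low bit rates possible.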
  • Publication
    Compression of the biomedical images using quadtree-based partitioned universally classified energy and pattern blocks
    (Springer London, 2019-03-15) Gezer, Murat; Gargari, Sepideh Nahavandi; Güz, Ümit; Gürkan, Hakan
    In this work, an efficient low-bit-rate image coding/compression method based on quadtree-based partitioned universally classified energy and pattern building blocks (QB-UCEPB) is introduced. The proposed method combines the low-bit-rate robustness of the well-known classified energy and pattern blocks (CEPB) method with the variable-sized quantization benefits of the quadtree-based (QB) partitioning technique. In the new method, the QB-UCEPB is first constructed with variable block sizes via quadtree-based partitioning, rather than the fixed block-size partitioning employed in the conventional CEPB method. The QB-UCEPB is then placed at both the transmitter and receiver sides of the communication channel as a universal codebook. Every quadtree-partitioned block of the input image is encoded using three quantities: the image block scaling coefficient, the index number of the QB-UCEB, and the index number of the QB-UCPB. These quantities are sent from the transmitter to the receiver through the communication channel. The quadtree-partitioned input image blocks are then reconstructed at the receiver using a decoding algorithm that exploits the proposed mathematical model. Experimental results show that the new method substantially reduces the computational complexity of the classical CEPB. Furthermore, higher compression ratios and higher PSNR and SSIM levels are achieved even at low bit rates compared with the classical CEPB and conventional methods such as SPIHT, EZW, and JPEG2000.
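The variable-sized partitioning that distinguishes this method from fixed-block CEPB can be sketched as a standard quadtree split. The variance threshold used here as the split criterion is an illustrative stand-in; the paper's actual homogeneity rule may differ.

```python
import numpy as np

def quadtree_partition(img, threshold, min_size=2):
    """Recursively split a square image into variable-sized blocks.
    A block is split while its variance exceeds `threshold`
    (an assumed homogeneity criterion, not the paper's exact rule).
    Returns a list of (row, col, size) leaf blocks."""
    leaves = []

    def split(r, c, s):
        block = img[r:r + s, c:c + s]
        if s > min_size and block.var() > threshold:
            h = s // 2
            for dr in (0, h):      # visit the four quadrants
                for dc in (0, h):
                    split(r + dr, c + dc, h)
        else:
            leaves.append((r, c, s))

    split(0, 0, img.shape[0])
    return leaves
```

Smooth regions stay as large blocks (few transmitted indices), while detailed regions are refined into smaller ones, which is the source of the variable-sized quantization benefit the abstract mentions.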
  • Publication
    Significance map pruning and other enhancements to SPIHT image coding algorithm
    (Elsevier Science, 2003-10) Bayazıt, Uluğ
    This paper proposes several enhancements to the Set Partitioning in Hierarchical Trees (SPIHT) image coding algorithm without changing the original algorithm's general skeleton. First and foremost, a method for significance map pruning based on a rate-distortion criterion is introduced. Specifically, the (Type A) sets of wavelet coefficients with small ratios of estimated distortion reduction to estimated rate contribution are deemed insignificant and effectively pruned. Even though determining such sets considerably increases the computational complexity of the encoder relative to the original SPIHT encoder, the original SPIHT decoder can still decode the generated bitstream at low computational complexity. The paper also proposes three low-complexity enhancements based on more sophisticated use of the adaptive arithmetic coder. Simulation results demonstrate that all of these enhancements yield modest compression gains at moderate to high rates.
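The pruning criterion described above reduces to a ratio test per candidate set. The sketch below assumes the distortion-reduction and rate estimates for each Type A set have already been computed; the tuple layout and threshold are illustrative, not the paper's encoder.

```python
def prune_sets(candidates, ratio_threshold):
    """Keep only sets whose estimated distortion reduction per bit of
    rate contribution meets `ratio_threshold`; the rest are pruned,
    i.e. coded as insignificant. `candidates` is a list of
    (set_id, distortion_reduction, rate_cost) tuples -- illustrative
    stand-ins for the per-set statistics a SPIHT encoder would estimate."""
    kept, pruned = [], []
    for set_id, d_red, rate in candidates:
        if rate > 0 and d_red / rate >= ratio_threshold:
            kept.append(set_id)
        else:
            pruned.append(set_id)
    return kept, pruned
```

Because pruning only removes refinement the decoder would have skipped anyway once the set is signalled insignificant, the bitstream stays decodable by an unmodified SPIHT decoder, as the abstract notes.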
  • Publication
    Postprocessing of decoded color images by adaptive linear filtering
    (Elsevier Science, 2003-02) Bayazıt, Uluğ
    This paper presents an image-adaptive linear filtering method for reconstructing the RGB (red, green, blue) color coordinates of a pixel from lossy compressed luminance/chrominance color coordinates. In the absence of quantization noise, the RGB coordinates of a pixel can be perfectly reconstructed by a standard, fixed filter whose support includes only the luminance/chrominance coordinates at the spatial location of the pixel. In the presence of quantization noise, however, a filter with a larger support, which also extends spatially over the luminance/chrominance coordinate planes, can exploit the statistical dependence among those planes and thereby yields more accurate reconstruction than the standard, fixed filter. We propose determining the coefficients of this adaptive linear filter optimally (in the minimum mean squared error sense) at the image encoder by solving a system of regression equations. When transmitted as side information to the image decoder, the filter coefficients need not incur significant overhead if they are quantized and compressed intelligently. Our simulation results demonstrate that applying our image-adaptive linear filtering method reduces the distortion of the decompressed color coordinate planes by several tenths of a dB with negligible overhead rate.
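The regression step described above can be sketched as an ordinary least-squares fit. This is a simplified per-pixel version: the feature vector here is just the three luminance/chrominance values plus an offset term, whereas the paper's filter has a spatially extended support; all names are assumptions.

```python
import numpy as np

def fit_linear_filter(ycc_samples, rgb_samples):
    """Solve the regression for filter coefficients mapping
    (quantization-noisy) luminance/chrominance feature vectors to RGB
    in the minimum mean-squared-error sense. `ycc_samples` is
    (n_pixels, n_features); `rgb_samples` is (n_pixels, 3)."""
    # Append a constant 1 per sample for an affine offset term.
    X = np.hstack([ycc_samples, np.ones((ycc_samples.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, rgb_samples, rcond=None)
    return W  # shape: (n_features + 1, 3), sent as side information

def apply_linear_filter(ycc_samples, W):
    """Decoder side: reconstruct RGB with the transmitted coefficients."""
    X = np.hstack([ycc_samples, np.ones((ycc_samples.shape[0], 1))])
    return X @ W
```

The encoder fits `W` against the original RGB pixels and transmits it; the decoder then applies the same small matrix to every pixel, which is why the overhead stays negligible.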