Optimizing C63 for x86 (Group 9)
Outline
- Bird’s-eye view: gprof of the reference encoder
- Optimizing SAD
- Results
gprof: reference, foreman

[gprof flat profile screenshot; timing columns lost in extraction. Each sample counts as 0.01 seconds; each sample hit covers 2 byte(s). Top self-time functions, in order: sad_block_8x8, me_block_8x8, dequant_idct_block_8x8, dct_quant_block_8x8, flush_bits, put_bits.]
Optimizing SAD
SAD: SSE2 PSAD

```c
void sad_block_8x8(uint8_t *block1, uint8_t *block2, int stride, int *result)
{
    int v;
    __m128i r = _mm_setzero_si128();

    for (v = 0; v < 8; v += 2) {
        /* Pack two 8-byte rows of each block into one 128-bit register */
        const __m128i b1 = _mm_set_epi64(*(__m64 *) &block1[(v+0)*stride],
                                         *(__m64 *) &block1[(v+1)*stride]);
        const __m128i b2 = _mm_set_epi64(*(__m64 *) &block2[(v+0)*stride],
                                         *(__m64 *) &block2[(v+1)*stride]);
        /* PSADBW: two partial SADs per instruction, one per 64-bit lane */
        r = _mm_add_epi16(r, _mm_sad_epu8(b2, b1));
    }

    /* Sum the two 64-bit lane results */
    *result = _mm_extract_epi16(r, 0) + _mm_extract_epi16(r, 4);
}
```
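For reference, a scalar version of the same computation (a hypothetical `sad_block_8x8_ref`, our naming, not part of the C63 code) makes explicit what the PSADBW variant accumulates, two rows per instruction:

```c
#include <stdint.h>
#include <stdlib.h>

/* Scalar reference for the 8x8 sum-of-absolute-differences.
 * The SSE2 version above computes exactly this sum, processing
 * two rows per PSADBW instruction. */
void sad_block_8x8_ref(uint8_t *block1, uint8_t *block2,
                       int stride, int *result)
{
    int u, v, sum = 0;
    for (v = 0; v < 8; ++v)
        for (u = 0; u < 8; ++u)
            sum += abs(block1[v * stride + u] - block2[v * stride + u]);
    *result = sum;
}
```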
How well does this perform?
- gprof: the improved SSE2 SAD uses 3.69 s, down from the reference implementation*
- Cachegrind: lots of branch-prediction misses in me_block_8x8
*) Foreman sequence on gpu-7
SAD: SSE4.1 MPSADBW + PHMINPOSUW

```c
void sad_block_8x8x8(uint8_t *block1, uint8_t *block2, int stride,
                     int *best, int *result)
{
    int v;
    __m128i r = _mm_setzero_si128();
    union {
        __m128i v;
        struct {
            uint16_t sad;            /* minimum SAD value */
            unsigned int index : 3;  /* index of the minimum (PHMINPOSUW) */
        } minpos;
    } mp;

    for (v = 0; v < 8; v += 2) {
        /* Two source rows in one register: v+1 in the high half, v+0 low */
        const __m128i b1 = _mm_set_epi64(*(__m64 *) &block1[(v+1)*stride],
                                         *(__m64 *) &block1[(v+0)*stride]);
        /* 16 reference bytes per row: 8 candidate x offsets */
        const __m128i b2 = _mm_loadu_si128((__m128i *) &block2[(v+0)*stride]);
        const __m128i b3 = _mm_loadu_si128((__m128i *) &block2[(v+1)*stride]);

        r = _mm_add_epi16(r, _mm_mpsadbw_epu8(b2, b1, 0b000));
        r = _mm_add_epi16(r, _mm_mpsadbw_epu8(b2, b1, 0b101));
        r = _mm_add_epi16(r, _mm_mpsadbw_epu8(b3, b1, 0b010));
        r = _mm_add_epi16(r, _mm_mpsadbw_epu8(b3, b1, 0b111));
    }

    mp.v = _mm_minpos_epu16(r);
    *result = mp.minpos.sad;
    *best = mp.minpos.index;
}
```
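A scalar sketch of what the MPSADBW/PHMINPOSUW pair computes (a hypothetical `sad_block_8x8x8_ref`; the name is ours, not from the encoder): eight SADs for the eight horizontally shifted candidate positions, then the minimum and its index:

```c
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Scalar equivalent of the SSE4.1 version: evaluate the 8x8 source
 * block against 8 horizontally shifted reference candidates
 * (x offset 0..7), then return the best SAD and its offset.
 * MPSADBW accumulates the 8 SADs; PHMINPOSUW picks the minimum. */
void sad_block_8x8x8_ref(uint8_t *block1, uint8_t *block2,
                         int stride, int *best, int *result)
{
    int x, u, v;
    *result = INT_MAX;
    for (x = 0; x < 8; ++x) {   /* candidate x offset */
        int sum = 0;
        for (v = 0; v < 8; ++v)
            for (u = 0; u < 8; ++u)
                sum += abs(block1[v * stride + u] -
                           block2[v * stride + u + x]);
        if (sum < *result) {
            *result = sum;
            *best = x;
        }
    }
}
```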
How well does this perform?
- gprof: the improved SSE4.1 SAD uses 0.90 s vs 3.69 s
- Cachegrind: branch-prediction misses reduced by a factor of 8
- Intel’s IACA tool: the CPU pipeline appears to be filled!
- Apparently both source- and reference-block loads compete with (V)MPSADBW for the CPU’s execution ports
- Assembly: better utilize AVX’s non-destructive instructions (fewer register copies); better reuse loaded data for SAD computations sharing the same source block
SAD 8x8x8x8: fewer src loads, fewer branches

Load the source block once (.macro load_src):

```asm
.macro load_src
    vmovq   (%rdi), %xmm0                   # src[0]
    vpinsrq $1, (%rdi,%rdx), %xmm0, %xmm0   # src[1]
    vmovq   (%rdi,%rdx,2), %xmm1            # src[2]
    vmovhps (%rdi,%r8), %xmm1, %xmm1        # src[3]
    vmovq   (%rdi,%rdx,4), %xmm2            # src[4]
    vmovhps (%rdi,%r9), %xmm2, %xmm2        # src[5]
    vmovq   (%rdi,%r8,2), %xmm3             # src[6]
    vmovhps (%rdi,%rax), %xmm3, %xmm3       # src[7]
.endm
```

Do SAD for 8x8 source vs. 8x8 candidate positions (relative y = 0…8, x = 0…8; .macro 8x8x8x8, excerpt):

```asm
    vmovdqu (%rsi), %xmm12         # ref[0]
    .8x8x1  0, 0, 12, 4 0
    vmovdqu (%rsi,%rdx), %xmm13    # ref[1]
    .8x8x1  0, 1, 13, 4 0
    .8x8x1  1, 0, 13, 5 0
    …
```
How well does this perform?
- gprof: the improved SAD 8x8x8x8 uses 0.53 s vs 0.90 s
- Valgrind: even less branching
SAD 4x8x8x8x8: branchless UV-plane ME

```asm
sad_4x8x8x8x8:
    .load_src                      # Load source block from %rdi
    mov %rsi, %rdi                 # Reference block (x, y)
    .8x8x8x8
    lea 8(%rdi), %rsi              # Reference block (x+8, y)
    .8x8x8x8
    lea (%rdi,%rdx,8), %rsi        # Reference block (x, y+8)
    .8x8x8x8
    lea 8(%rdi,%rdx,8), %rsi       # Reference block (x+8, y+8)
    .8x8x8x8
    …
    vphminposuw …
    ret
```
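In scalar terms, one such call amounts to a full search over a 16x16 grid of candidate offsets with a single source-block load. A hypothetical sketch (our naming and interpretation of the four macro expansions tiling the grid from the corners (0,0), (8,0), (0,8), (8,8)):

```c
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Scalar model of the branchless search: 16x16 = 256 candidate
 * (x, y) offsets, each scored with an 8x8 SAD against the same
 * source block, which is loaded only once in the assembly version. */
void sad_16x16_search_ref(uint8_t *src, uint8_t *ref, int stride,
                          int *best_x, int *best_y, int *best_sad)
{
    int x, y, u, v;
    *best_sad = INT_MAX;
    for (y = 0; y < 16; ++y)
        for (x = 0; x < 16; ++x) {
            int sum = 0;
            for (v = 0; v < 8; ++v)
                for (u = 0; u < 8; ++u)
                    sum += abs(src[v * stride + u] -
                               ref[(v + y) * stride + u + x]);
            if (sum < *best_sad) {
                *best_sad = sum;
                *best_x = x;
                *best_y = y;
            }
        }
}
```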
How well does this perform?
- gprof: the improved 4x8x8x8x8 SAD uses 0.47 s vs 0.53 s
- Valgrind: even less branching!
- Total runtime of the Foreman sequence reduced from ~40.1 s to ~1.6 s (a factor of 25)
SAD assembly code: iterative improvement

Instruction        Function variant    Cycle mean, adjusted for single 8x8 block
MPSADBW            SAD 8x8x8           7.5
VMPSADBW           SAD 2x8x8x8         3.6
                   SAD 4x8x8x8         4.3
                   SAD 8x8x8x8 v1      2.9
                   SAD 8x8x8x8 v2      2.8
                   SAD 4x8x8x8x8       2.7
Future research    SAD 16x8x8x8x8      2.6? Even fewer branches?
Image quality: comparable

[Chart: Mean PSNR, PSNR 95%, Mean SSIM, SSIM 95% for Tractor (reference vs. improved) and Foreman (reference vs. improved); numeric values lost in extraction.]
gprof: reference, tractor 50 frames (-O3) — NOT ON NDLAB!

[gprof flat profile screenshot; timing columns lost in extraction. Each sample counts as 0.01 seconds; each sample hit covers 2 byte(s). Top self-time functions, in order: sad_block_8x8, c63_motion_estimate, dct_quant_block_8x8, dequant_idct_block_8x8, write_block, dequantize_idct, c63_motion_compensate, dct_quantize, put_bits.]
gprof: improved, tractor 50 frames (-O3) — NOT ON NDLAB!

[gprof flat profile screenshot over 5.86 seconds of runtime; timing columns lost in extraction. Each sample hit covers 2 byte(s). Top self-time functions, in order: sad_4x8x8x8x8, dct_quant_block_8x8, dequant_idct_block_8x8, write_frame, dequantize_idct, put_bits, dct_quantize, transpose_block_avx, sad_8x8x8x8, c63_motion_estimate, c63_motion_compensate.]

Speedup: 134.2 / 5.86 ≈ 24x