Is the following test case valid for gcc?
typedef signed char v16signed_char __attribute__ ((__vector_size__ (sizeof(signed char) * 16)));

v16signed_char and_imm(v16signed_char a)
{
    v16signed_char s = a & 0xFF;
    return s;
}
It fails to compile on gcc, apparently because 0xFF (255) does not fit in a signed char, whose range is -128..127, so broadcasting the scalar to the vector's element type would truncate the value. Should it fail? By failure I mean this compiler error:
error: conversion of scalar ‘int’ to vector ‘v16signed_char {aka __vector(16) signed char}’ involves truncation
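For comparison, here are two variants that I would expect to compile, assuming the rule is that the scalar constant must be representable in the vector's element type (the unsigned typedef and the function names here are mine, added only for illustration):

typedef unsigned char v16unsigned_char __attribute__ ((__vector_size__ (sizeof(unsigned char) * 16)));

/* 0xFF (255) fits in unsigned char, so the scalar-to-vector
   broadcast loses nothing. */
v16unsigned_char and_imm_unsigned(v16unsigned_char a)
{
    v16unsigned_char s = a & 0xFF;
    return s;
}

/* Casting to the element type first yields (signed char)-1, all
   bits set, so the broadcast again involves no truncation. */
v16signed_char and_imm_cast(v16signed_char a)
{
    v16signed_char s = a & (signed char)0xFF;
    return s;
}

If that is indeed the rule, the error would seem to be deliberate rather than a bug: gcc rejects the broadcast only when the constant cannot be represented in the element type.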