I'm learning about the binary representation of integers and tried to write a function that returns an `int` multiplied by 2, using saturation. The idea is that if the value overflows positively the function returns `INT_MAX`, and conversely if it overflows negatively it returns `INT_MIN`. In all other cases the binary value is left-shifted by 1.

What I'm wondering is why I have to cast the value `0xC0000000` to an `int` in order to get my function to work correctly when I pass the argument `x = 1`.

Here is my function:

```
int timestwo(int x) {
    if (x >= 0x40000000)            // INT_MAX/2 + 1
        return 0x7fffffff;          // INT_MAX
    else if (x < (int) 0xC0000000)  // INT_MIN/2
        return 0x80000000;          // INT_MIN
    else
        return x << 1;
    return 0;
}
```
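
Here is a minimal sketch of the comparison in isolation (assuming a 32-bit, two's-complement `int`), showing how the cast changes the result for `x = 1`:

```
#include <stdio.h>

int main(void) {
    int x = 1;

    /* 0xC0000000 does not fit in a 32-bit int, so the constant has type
       unsigned int; x is converted to unsigned and 1u < 0xC0000000u holds. */
    printf("without cast: %d\n", x < 0xC0000000);        /* prints 1 */

    /* With the cast, both operands are int: 1 < -1073741824 is false. */
    printf("with cast:    %d\n", x < (int) 0xC0000000);  /* prints 0 */

    return 0;
}
```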

Assuming a 32-bit `int`, `return x << 1;` results in undefined behavior whenever `x` is negative, because left-shifting a negative value is undefined in C; with your guards that covers every `x` with `(int) 0xC0000000 <= x < 0`. Also, why hard-code hexadecimal values instead of `INT_MAX` and `INT_MIN`? And worse, if you know there are such constants (your comments name them), why don't you use them? The constants are there to avoid architecture dependencies, which are commonly solved by comparing values of the same type. That is exactly what bites you here: `0xC0000000` does not fit in an `int`, so the constant has type `unsigned int`; without the cast, `x` is converted to `unsigned` for the comparison, `1 < 0xC0000000u` is true, and the function returns `INT_MIN` for `x = 1`.
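
For illustration, here is one possible way to write it with `<limits.h>` (a sketch; the name `timestwo_sat` and the use of `x * 2` instead of a shift are just choices for this example) that avoids both the magic numbers and the negative-shift problem:

```
#include <limits.h>

/* Saturating multiply-by-two using the standard constants.
   INT_MAX / 2 and INT_MIN / 2 are the exact overflow thresholds. */
int timestwo_sat(int x) {
    if (x > INT_MAX / 2)    /* doubling would exceed INT_MAX   */
        return INT_MAX;
    if (x < INT_MIN / 2)    /* doubling would go below INT_MIN */
        return INT_MIN;
    return x * 2;           /* safe: the result fits in an int */
}
```

Because both saturation branches are taken before the multiply, `x * 2` can never overflow, so the function stays defined no matter how wide `int` actually is.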