The emerging audio formats for 3D rendering use several rendering methods and are intended for playback on the various loudspeaker layouts defined in ITU-R BS.2051. The aim of this work is to investigate the perceptual properties of rendered point sources with regard to source width, the smoothness of moving sources, timbral change, and localisation. The central question is how to quantify differences between several rendering systems mathematically, and to verify through listening experiments whether these differences are audible to humans. The material for the listening experiments should consist of ecologically valid and well-comparable spatial audio scenes. The listening experiments in this work focus on two exemplary layouts that are likely to become standard in broadcasting and storage. Further potential optimizations of the AllRAD approach, as well as other novel rendering functionalities, are also investigated in this work.