Technological advancements in AI are generating major breakthroughs, but they also have significant social consequences that require regulation. The proposed European regulation on AI, also known as the AI Act, will impose a number of obligations on providers of high-risk AI systems that can be considered "ethical" obligations, including respect for fairness and fundamental rights. To ensure the compliance of these systems with ethical requirements, the proposed AI Act plans to rely on harmonized standards, raising the question of the compatibility between technical standards and ethical issues. In this paper, we contribute to this debate by recalling the role of standards and certification in Europe, before presenting the actors currently working on "ethical" AI standards. Through this inventory, we show the diversity of their work and the competition emerging between different visions of AI ethics. Finally, we discuss the risks raised by these standards, such as the difficulty of defining objective criteria and the possibility that citizens may be misled.