GitHub Copilot has recently resumed working with code containing words on its hardcoded ban list, prompting discussion about censorship and what AI refusal behavior implies. Users report inconsistent reliability from the tool, raising questions about how bias introduced during AI training compares with traditional censorship practices. One user suggests that such filtering could be circumvented by embedding banned terms in obfuscated forms that defeat a literal match (as sketched below), while others question the logic of censoring AI input separately from human communication. The discussion reflects broader societal debates about language restrictions and technology's role in moderating content.
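
The circumvention idea is easy to illustrate: if the filter is a literal match against a hardcoded word list, as commenters speculate, trivially obfuscated spellings pass through. Below is a minimal sketch under that assumption; the `BANNED` set, the `is_blocked` helper, and the zero-width-space trick are hypothetical illustrations, not Copilot's actual filter.

```python
# Hypothetical sketch of a naive hardcoded ban-list filter and one way
# obfuscation defeats it. Not Copilot's actual implementation.

BANNED = {"forbidden"}  # hypothetical stand-in for a real ban list


def is_blocked(text: str) -> bool:
    """Naive check: block if any banned word appears verbatim."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)


# A literal occurrence is caught...
print(is_blocked("def handle_forbidden_case():"))  # True

# ...but trivial obfuscation slips through: a zero-width space
# (U+200B) inserted mid-word breaks the substring match.
obfuscated = "def handle_forb\u200bidden_case():"
print(is_blocked(obfuscated))  # False
```

If the real filter works anything like this, defeating it requires no sophistication, which is part of why commenters question whether filtering AI input achieves anything that moderating human-facing output would not.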