
Subprocess Command Encoding

I'm currently migrating a script from Perl to Python 3 (3.6.5). It runs on Windows Server 2016. The script builds a command line with arguments and executes the resulting string.

Solution 1:

In Python 3.3+ you can separately indicate that you expect text in a particular encoding. The keyword argument universal_newlines=True was given the more accurate and transparent alias text=True in 3.7 (the old name still works).

This keyword basically says "just use whatever encoding is the default on my system" (so basically UTF-8 on anything reasonably modern, except on Windows, where you get some Cthulhu atrocity from the abyss, i.e. the system's default code page).
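
If you want to see what that default actually is on a given machine, locale.getpreferredencoding(False) is the encoding text mode falls back to when no encoding= is passed. A minimal check, not part of the original script:

import locale

# The fallback encoding used by text=True / universal_newlines=True when no
# encoding= is given; typically a code page such as 'cp1252' on Windows and
# 'UTF-8' on most modern Unix systems.
print(locale.getpreferredencoding(False))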

In the absence of this keyword, subprocesses receive and return bytes in Python 3.

Of course, if you know the encoding, you can also separately .decode() the bytes you get back.
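
A minimal sketch of that bytes-then-decode flow; hostname is only a placeholder command, assumed to be available on both Windows and Unix:

import subprocess

# Without text=True or encoding=, check_output() returns a bytes object
raw = subprocess.check_output(["hostname"])
print(type(raw))                              # <class 'bytes'>
print(raw.decode("utf-8", errors="replace"))  # decode it yourself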

If you know the encoding, it is probably better to pass it explicitly with the encoding= keyword argument (added in Python 3.6), even if you expect it to match the system encoding.

import subprocess

# Use the system default encoding (text=True; universal_newlines=True before 3.7)
response = subprocess.check_output([...], text=True)
# Specify the encoding explicitly (Python 3.6+)
response = subprocess.check_output([...], encoding='utf-8')
# Receive raw bytes and decode them yourself
response = subprocess.check_output([...]).decode('utf-8')

Solution 2:

The trick to get the script running is to encode the arguments to UTF-8 and then decode the resulting bytes with the 'ansi' codec.

import subprocess

command = r'C:\PROGRAM FILES\Application\bin\cfg.exe'
arguments = ["-modify", "-location:123á456ß99"]

# Re-encode each argument: UTF-8 bytes decoded with the active ANSI code page
# ('ansi' is a Windows-only alias for 'mbcs', available since Python 3.6);
# errors='replace' avoids a UnicodeDecodeError for bytes with no mapping.
arguments_ansi = [x.encode('utf-8').decode('ansi', 'replace') for x in arguments]

cmd = [command] + arguments_ansi
# encoding="utf-8" already puts the pipes in text mode, so adding
# universal_newlines=True here would be redundant.
result = subprocess.check_output(cmd, shell=False, encoding="utf-8")
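
For reference, the 'ansi' codec used above decodes with whatever ANSI code page the Windows installation is configured with. One way to check which code page that is (a Windows-only sketch, not part of the original solution):

import ctypes

# GetACP() returns the numeric identifier of the active ANSI code page,
# e.g. 1252 on Western European Windows installations.
print(ctypes.windll.kernel32.GetACP())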
