UTF-8 to hex in JavaScript

JavaScript: Unicode string to hex

I’m trying to convert a Unicode string to a hexadecimal representation in JavaScript. This is what I have:

function convertFromHex(hex) {
    hex = hex.toString(); // force conversion
    var str = '';
    for (var i = 0; i < hex.length; i += 2)
        str += String.fromCharCode(parseInt(hex.substr(i, 2), 16));
    return str;
}

function convertToHex(str) {
    var hex = '';
    for (var i = 0; i < str.length; i++) {
        hex += '' + str.charCodeAt(i).toString(16);
    }
    return hex;
}

But it fails on Unicode characters, like Chinese. Input: 漢字 Output: ªo»[W. Any ideas? Can this be done in JavaScript?

7 Answers

Remember that a JavaScript code unit is 16 bits wide. Therefore the hex string form will be 4 digits per code unit. Usage:

var str = "\u6f22\u5b57"; // "\u6f22\u5b57" === "漢字"
alert(str.hexEncode().hexDecode());

String.prototype.hexEncode = function() {
    var hex, i;
    var result = "";
    for (i = 0; i < this.length; i++) {
        hex = this.charCodeAt(i).toString(16);
        result += ("000" + hex).slice(-4);
    }
    return result;
};

String.prototype.hexDecode = function() {
    var j;
    var hexes = this.match(/.{1,4}/g) || [];
    var back = "";
    for (j = 0; j < hexes.length; j++) {
        back += String.fromCharCode(parseInt(hexes[j], 16));
    }
    return back;
};

Thanks, just one question though (may be a dumb one): how do you get \u6f22\u5b57 from 漢字 in JavaScript? The closest is with the escape() function, but this uses %; I guess a regex of sorts could be used to replace % with \, but the escape() function is also deprecated. encodeURI and encodeURIComponent both give a different output. Any idea? – user429620 Feb 8 ’14 at 16:17

"\u6f22\u5b57" is the Unicode escape form of the literal "漢字", in the same way that \n is the newline character. I tend to use them to avoid ambiguity and avoid character encoding issues. See the specification for details. To generate them yourself, change the above ("000"+hex).slice(-4) to "\\u" + ("000"+hex).slice(-4). The expression "\u6f22\u5b57" === "漢字" evaluates to true because after code parsing they are the same. – McDowell Feb 8 ’14 at 16:28
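A minimal sketch of the change McDowell describes (the toUnicodeEscapes name is just for illustration and is not part of the original answer):

function toUnicodeEscapes(str) {
    var result = "";
    for (var i = 0; i < str.length; i++) {
        var hex = str.charCodeAt(i).toString(16);
        // Prefix each zero-padded code unit with a literal backslash-u.
        result += "\\u" + ("000" + hex).slice(-4);
    }
    return result;
}

toUnicodeEscapes("漢字"); // the 12-character text \u6f22\u5b57, not the two CJK characters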

Thanks, one issue I’m running into: sometimes hex.match(/.{1,4}/g) does not match anything (error: null is not an object (evaluating hexes.length)). Do you know what could be the cause? – user429620 Feb 9 ’14 at 19:13

If you were using the top algorithm as written, "test" encodes to "0074006500730074". There is no ASCII. JavaScript strings are always UTF-16. – McDowell Feb 9 ’14 at 19:34

I fixed the hexDecode function since it didn’t seem to work: var a = "\\x73\\x75\\x62\\x73\\x74\\x72"; var str = "\\u6f22\\u5b57"; String.prototype.hexDecode = function() { var j; var hexes = this.split("\\"); var back = ""; for (j = 1; j < hexes.length; j++) { back += String.fromCharCode(parseInt(hexes[j].substr(1), 16)); } return back; }; a.hexDecode(); // "substr" str.hexDecode(); // "漢字". This also works for hexadecimal escape sequences. – Y. Yoshii Feb 14 ’18 at 4:26

Here is a tweak of McDowell’s algorithm that doesn’t pad the result:

function toHex(str) {
    var result = '';
    for (var i = 0; i < str.length; i++) {
        result += str.charCodeAt(i).toString(16);
    }
    return result;
}
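For example (outputs assume the function above; note the lack of padding compared with hexEncode):

toHex("\u6f22\u5b57"); // "6f225b57"
toHex("test");         // "74657374"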

Not sure what I’m looking at, but this is useful for me to get a user’s private CouchDB database! Thanks – kyw Feb 25 ’19 at 2:16

Kudos @redgeoff! This solution works when passing the string into PHP and decoding with hex2bin(). – JMerinoH Jun 25 ’20 at 3:36


It depends on what encoding you use. If you want to convert UTF-8-encoded hex to a string, use this:

function fromHex(hex, str) {
    try {
        str = decodeURIComponent(hex.replace(/(..)/g, '%$1'));
    } catch (e) {
        str = hex;
        console.log('invalid hex input: ' + hex);
    }
    return str;
}

For the other direction use this:

function toHex(str, hex) {
    try {
        hex = unescape(encodeURIComponent(str))
            .split('').map(function (v) {
                return v.charCodeAt(0).toString(16);
            }).join('');
    } catch (e) {
        hex = str;
        console.log('invalid text input: ' + str);
    }
    return hex;
}
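A quick round trip with the string from the question (the expected outputs below match the UTF-8 bytes e6 bc a2 e5 ad 97 of 漢字):

toHex("漢字");           // "e6bca2e5ad97"
fromHex("e6bca2e5ad97"); // "漢字"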

For the toHex function, if the hex value is a single digit, it needs '0' padding: if \n or \t appears in the text, it would appear as '9' or 'a', but it should be '09' and '0a' respectively. – Munawwar Mar 16 ’20 at 20:53

You can change it to return v.charCodeAt(0).toString(16).padStart(2, '0'). – Munawwar Mar 25 ’20 at 10:56
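A sketch of the padded variant Munawwar suggests (the toHexPadded name is hypothetical; it follows the same structure as the toHex above):

function toHexPadded(str) {
    try {
        return unescape(encodeURIComponent(str))
            .split('').map(function (v) {
                // padStart keeps single-digit bytes such as \n as "0a" rather than "a"
                return v.charCodeAt(0).toString(16).padStart(2, '0');
            }).join('');
    } catch (e) {
        console.log('invalid text input: ' + str);
        return str;
    }
}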

how do you get "\u6f22\u5b57" from 漢字 in JavaScript?

These are JavaScript Unicode escape sequences e.g. \u12AB . To convert them, you could iterate over every code unit in the string, call .toString(16) on it, and go from there.

However, it is more efficient to also use hexadecimal escape sequences e.g. \xAA in the output wherever possible.

Also note that ASCII symbols such as A, b, and - probably don’t need to be escaped.
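A rough sketch of that approach (this is not the jsesc library, just an illustration, and the escapeNonAscii name is made up): keep printable ASCII as-is, use \xAA for code units that fit in one byte, and \uAAAA otherwise.

function escapeNonAscii(str) {
    let result = '';
    for (let i = 0; i < str.length; i++) {
        const code = str.charCodeAt(i);
        if (code >= 0x20 && code <= 0x7e) {
            result += str[i]; // printable ASCII: leave unescaped
        } else if (code <= 0xff) {
            result += '\\x' + code.toString(16).padStart(2, '0');
        } else {
            result += '\\u' + code.toString(16).padStart(4, '0');
        }
    }
    return result;
}

escapeNonAscii('A-b 漢字'); // the text A-b \u6f22\u5b57 (with literal backslashes)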

I’ve written a small JavaScript library that does all this for you, called jsesc . It has lots of options to control the output.

Your question was tagged as utf-8 . Reading the rest of your question, UTF-8 encoding/decoding didn’t seem to be what you wanted here, but in case you ever need it: use utf8.js (online demo).

A more up-to-date solution, for encoding:

// This is the same for all of the below, and
// you probably won't need it except for debugging
// in most cases.
function bytesToHex(bytes) {
  return Array.from(
    bytes,
    byte => byte.toString(16).padStart(2, "0")
  ).join("");
}

// You almost certainly want UTF-8, which is
// now natively supported:
function stringToUTF8Bytes(string) {
  return new TextEncoder().encode(string);
}

// But you might want UTF-16 for some reason.
// .charCodeAt(index) will return the underlying
// UTF-16 code units (not code points!), so you
// just need to format them in whichever endian order you want.
function stringToUTF16Bytes(string, littleEndian) {
  const bytes = new Uint8Array(string.length * 2);
  // Using DataView is the only way to get a specific
  // endianness.
  const view = new DataView(bytes.buffer);
  for (let i = 0; i != string.length; i++) {
    // DataView offsets are in bytes, so each code unit goes at i * 2.
    view.setUint16(i * 2, string.charCodeAt(i), littleEndian);
  }
  return bytes;
}

// And you might want UTF-32 in even weirder cases.
// Fortunately, iterating a string gives the code
// points, which are identical to the UTF-32 encoding,
// though you still have the endianness issue.
function stringToUTF32Bytes(string, littleEndian) {
  const codepoints = Array.from(string, c => c.codePointAt(0));
  const bytes = new Uint8Array(codepoints.length * 4);
  // Using DataView is the only way to get a specific
  // endianness.
  const view = new DataView(bytes.buffer);
  for (let i = 0; i != codepoints.length; i++) {
    // Again, offsets are in bytes: each code point occupies 4 bytes.
    view.setUint32(i * 4, codepoints[i], littleEndian);
  }
  return bytes;
}
bytesToHex(stringToUTF8Bytes("hello 漢字 👍"))
// "68656c6c6f20e6bca2e5ad9720f09f918d"

bytesToHex(stringToUTF16Bytes("hello 漢字 👍", false))
// "00680065006c006c006f00206f225b570020d83ddc4d"

bytesToHex(stringToUTF16Bytes("hello 漢字 👍", true))
// "680065006c006c006f002000226f575b20003dd84ddc"

bytesToHex(stringToUTF32Bytes("hello 漢字 👍", false))
// "00000068000000650000006c0000006c0000006f0000002000006f2200005b57000000200001f44d"

bytesToHex(stringToUTF32Bytes("hello 漢字 👍", true))
// "68000000650000006c0000006c0000006f00000020000000226f0000575b0000200000004df40100"

For decoding, it’s generally a lot simpler; you just need:

function hexToBytes(hex) {
  const bytes = new Uint8Array(hex.length / 2);
  for (let i = 0; i !== bytes.length; i++) {
    bytes[i] = parseInt(hex.substr(i * 2, 2), 16);
  }
  return bytes;
}

then use the encoding parameter of TextDecoder:

// UTF-8 is default
new TextDecoder().decode(hexToBytes("68656c6c6f20e6bca2e5ad9720f09f918d"));

// but you can also use:
new TextDecoder("UTF-16LE").decode(hexToBytes("680065006c006c006f002000226f575b20003dd84ddc"));
new TextDecoder("UTF-16BE").decode(hexToBytes("00680065006c006c006f00206f225b570020d83ddc4d"));
// "hello 漢字 👍"

You might notice UTF-32 is not on that list, which is a pain, so:

function bytesToStringUTF32(bytes, littleEndian) {
  const view = new DataView(bytes.buffer);
  const codepoints = new Uint32Array(view.byteLength / 4);
  for (let i = 0; i !== codepoints.length; i++) {
    codepoints[i] = view.getUint32(i * 4, littleEndian);
  }
  return String.fromCodePoint(...codepoints);
}
bytesToStringUTF32(hexToBytes("00000068000000650000006c0000006c0000006f0000002000006f2200005b57000000200001f44d"), false)
bytesToStringUTF32(hexToBytes("68000000650000006c0000006c0000006f00000020000000226f0000575b0000200000004df40100"), true)
// "hello 漢字 👍"


"??".split("").reduce((hex,c)=>hex+=c.charCodeAt(0).toString(16).padStart(4,"0"),"") 
"hi".split("").reduce((hex,c)=>hex+=c.charCodeAt(0).toString(16).padStart(2,"0"),"") 

ASCII (utf-8) binary HEX string to string

"68656c6c6f20776f726c6421".match(/./g).reduce((acc,char)=>acc+String.fromCharCode(parseInt(char, 16)),"") 

String to ASCII (utf-8) binary HEX string

"hello world!".split("").reduce((hex,c)=>hex+=c.charCodeAt(0).toString(16).padStart(2,"0"),"") 

String to UNICODE (utf-16) binary HEX string

"hello world!".split("").reduce((hex,c)=>hex+=c.charCodeAt(0).toString(16).padStart(4,"0"),"") 

UNICODE (utf-16) binary HEX string to string

"00680065006c006c006f00200077006f0072006c00640021".match(/./g).reduce((acc,char)=>acc+String.fromCharCode(parseInt(char, 16)),"") 

Here is my take: these functions convert a UTF-8 string to proper hex without the extra zero padding. A real UTF-8 string has characters that are 1, 2, 3, and 4 bytes long.

While working on this I found a couple of key things that solved my problems:

  1. str.split('') doesn’t handle multi-byte characters like emojis correctly. The proper/modern way to handle this is with Array.from(str).
  2. encodeURIComponent() and decodeURIComponent() are great tools to convert between string and hex. They are pretty standard, and they handle UTF-8 correctly.
  3. (Most) ASCII characters (codes 0 to 127) don’t get URI-encoded, so they need to be handled separately. But c.charCodeAt(0).toString(16) works perfectly for those.
function utf8ToHex(str) {
    return Array.from(str).map(c =>
        c.charCodeAt(0) < 128
            ? c.charCodeAt(0).toString(16)
            : encodeURIComponent(c).replace(/\%/g, '').toLowerCase()
    ).join('');
}

function hexToUtf8(hex) {
    return decodeURIComponent('%' + hex.match(/.{1,2}/g).join('%'));
}
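For example, round-tripping a multi-byte emoji with the functions above (expected results shown as comments; the UTF-8 bytes of 👍 are f0 9f 91 8d):

utf8ToHex("hello 👍");  // "68656c6c6f20f09f918d"
hexToUtf8("f09f918d"); // "👍"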
