Parsing using a Graphics Processing Unit (GPU)?

Roger L Costello <>
Mon, 31 Aug 2020 10:35:53 +0000

          From comp.compilers


Newsgroups: comp.compilers
Organization: Compilers Central
Keywords: parse, performance, comment
Posted-Date: 01 Sep 2020 00:44:51 EDT

Hi Folks,

I am reading a book [1] on machine learning and the book says some pretty
interesting things:

"In the search for more speed, machine learning researchers started taking
advantage of special hardware found in some computers, originally designed to
improve graphics performance. You may have heard these called graphics cards.
... Those graphics cards contain a GPU, or graphics processing unit. Unlike a
general purpose CPU, a GPU is designed to perform specific tasks, and do them
well. One of those tasks is to carry out arithmetic, including matrix
multiplication, in a highly parallel way. ... GPUs have many more [than CPUs]
arithmetic cores, thousands are fairly common today. This means a huge
workload can be split amongst all those cores and the job can be done


Has the parsing community found a way to take advantage of GPUs?

From the above excerpt, it appears that GPUs are especially good at
arithmetic. When I think of parsing, I don't think of lots of arithmetic.
Perhaps someone has devised a way to recast the parsing problem into an
arithmetic problem?
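[There is in fact a classical recasting of parsing into arithmetic: Valiant
(1975) showed that context-free recognition reduces to boolean matrix
multiplication. The sketch below is not Valiant's algorithm, just a plain CYK
recognizer whose inner step is a boolean matrix product (NumPy's @ on bool
arrays), so the heavy operation is exactly the kind of matrix arithmetic a GPU
is built for. The a^n b^n grammar and all names are illustrative, not from any
particular system.

```python
import numpy as np

def cyk_matrix(word, unary, binary, start):
    """CYK recognition where each nonterminal's chart is a boolean
    matrix and every binary rule is applied as a boolean matrix
    product -- the matrix-arithmetic view of parsing.

    unary:  dict mapping nonterminal A -> set of terminals, for A -> a
    binary: list of (A, B, C) triples, for CNF rules A -> B C
    """
    n = len(word)
    nts = set(unary) | {x for rule in binary for x in rule}
    # T[A][i, j] is True iff nonterminal A derives word[i:j].
    T = {A: np.zeros((n + 1, n + 1), dtype=bool) for A in nts}
    for i, ch in enumerate(word):
        for A, terminals in unary.items():
            if ch in terminals:
                T[A][i, i + 1] = True
    # Fixpoint iteration: n rounds suffice, since each round can only
    # add derivations for strictly longer spans.
    for _ in range(n):
        for A, B, C in binary:
            # Boolean matmul: (T[B] @ T[C])[i, j] is True iff some
            # split point k has B => word[i:k] and C => word[k:j].
            # NumPy's @ on bool arrays uses AND for * and OR for +.
            T[A] |= T[B] @ T[C]
    return bool(T[start][0, n])
```

On a GPU these boolean products over the whole chart are precisely the
highly parallel workload the book describes, which is why Valiant's
reduction keeps resurfacing in work on parallel parsing.]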

Any thoughts you might have on:

(a) parsing-using-GPUs, and
(b) recasting-the-parsing-problem-into-an-arithmetic-problem

would be appreciated. /Roger

[1] "Make Your First GAN with Pytorch" by Tariq Rashid

[Parsing is not usually an important factor in compiler performance.
The slow parts are the lexer, because it has to look at every
character of the input, and some optimizations that have to analyze
the entire intermediate form of the program. The first step in lexing
is to identify what class each character is, e.g., identifier, white
space, or operator. Perhaps a GPU could do vector lookups to speed
that up. For optimizations, I can sort of imagine how some analyses
like reachability might be expressible as matrices. -John]
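[The vector-lookup idea above can be sketched in a few lines of NumPy: a
256-entry table indexed by byte value, applied to the whole input in one
gather. The class codes and table contents below are an invented toy lexer,
purely for illustration; the point is that every character is classified
independently, which is exactly the data-parallel shape a GPU handles well.

```python
import numpy as np

# Toy character classes for a hypothetical lexer (illustrative only).
OTHER, IDENT, DIGIT, SPACE, OP = 0, 1, 2, 3, 4

# Build a 256-entry class table, indexed by byte value.
table = np.zeros(256, dtype=np.uint8)  # default class: OTHER
for c in b"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_":
    table[c] = IDENT
for c in b"0123456789":
    table[c] = DIGIT
for c in b" \t\r\n":
    table[c] = SPACE
for c in b"+-*/=<>(){};":
    table[c] = OP

def classify(src: bytes) -> np.ndarray:
    """Classify every byte of the input at once with a single table
    gather -- each element is independent, so the work parallelizes."""
    return table[np.frombuffer(src, dtype=np.uint8)]
```

The subsequent step of grouping classified characters into tokens is
sequential in the obvious formulation, which is where the real research
difficulty in GPU lexing lies. -ed.]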
