Compare commits

..

100 Commits

Author SHA1 Message Date
06a99ce232 Implement new tokens
I think this is all of them. The test will tell.
2022-08-30 01:02:49 -04:00
Sasha Koshka
1c2194b68a Added text descriptions for new tokens 2022-08-25 23:21:00 -04:00
Sasha Koshka
453a596587 Added test case for new tokens 2022-08-25 23:17:42 -04:00
c3b6330b22 Added base function parsing 2022-08-25 20:01:12 -04:00
723b506005 Added test code for function sections 2022-08-25 16:08:18 -04:00
6bbee2e13b Created comprehensive test case 2022-08-25 15:46:35 -04:00
Sasha Koshka
9fd285920b Began writing test case for function sections 2022-08-25 13:31:09 -04:00
Sasha Koshka
e630ec6f04 Added function section to tree 2022-08-25 12:02:43 -04:00
Sasha Koshka
0ac71fa1c3 Added progress heatmap 2022-08-25 00:51:54 -04:00
Sasha Koshka
9232432c35 Implemented thos bad boys 2022-08-25 00:05:40 -04:00
Sasha Koshka
b536b01eeb Added new tokens to test case 2022-08-25 00:01:28 -04:00
Sasha Koshka
8175a9d4c5 Added some more tokens to the TokenKind enum 2022-08-24 23:58:21 -04:00
Sasha Koshka
3dd2ea83d3 I forgot the 2022-08-24 23:55:34 -04:00
Sasha Koshka
b7631530bc yeah 2022-08-24 23:54:06 -04:00
Sasha Koshka
fa1d8efe55 Its just as I feared. Identifier parsing doesn't work! 2022-08-24 23:50:16 -04:00
e74aff3299 Merge pull request 'tree-cleanup' (#9) from tree-cleanup into main
Reviewed-on: #9
2022-08-25 00:24:41 +00:00
Sasha Koshka
89a60e620e Altered objt section test case to not be alphabetically organized 2022-08-24 20:20:55 -04:00
Sasha Koshka
cd528552c8 Object sections now parse members into array 2022-08-24 20:19:14 -04:00
Sasha Koshka
067bf2f4df Altered tree so that object members are stored in an array 2022-08-24 20:09:57 -04:00
777c8df6a4 Changed the logo color because why not lol 2022-08-24 18:57:45 -04:00
c470997887 Did the same thing to interfaces 2022-08-24 18:57:07 -04:00
715766edb4 Objects can only inherit by specifiying an identifier 2022-08-24 18:52:31 -04:00
821fa0ecb3 Merge pull request 'objt-bitfields' (#8) from objt-bitfields into main
Reviewed-on: #8
2022-08-24 22:46:31 +00:00
e316eb7791 Changed bit field syntax to use an & symbol 2022-08-24 18:37:44 -04:00
731cc828ce Added untested bit width parsing 2022-08-24 18:29:15 -04:00
05aa0e6177 Added bitfields to object section test case 2022-08-24 18:23:11 -04:00
fb43f96acc Added bit fields to tree and ToString for object member 2022-08-24 18:22:47 -04:00
Sasha Koshka
b64fbd9fc4 Split tests into multiple files
This should make it easier to work on sections independantly of one another
without creating merge conflicts
2022-08-24 01:22:24 -04:00
Sasha Koshka
0d366964ca Enum members are now ordered 2022-08-24 01:16:44 -04:00
a5477717eb Merge pull request 'face-section' (#7) from face-section into main
Reviewed-on: #7
2022-08-24 04:57:14 +00:00
Sasha Koshka
0b80a55f79 Repaired output formatting of interface section 2022-08-24 00:53:42 -04:00
Sasha Koshka
08935d69c0 Parser actually adds interface behavior to interface 2022-08-24 00:52:37 -04:00
Sasha Koshka
39f8d7e4ac Fixed parsing of interface section behaviors 2022-08-24 00:25:52 -04:00
Sasha Koshka
1f88b54eaa Face sections are actually ToString'd now 2022-08-23 22:25:21 -04:00
b0d4ecc83f Added interface section parsing to body 2022-08-23 15:14:44 -04:00
4eac5c67aa Added untested interface section parsing 2022-08-23 15:13:00 -04:00
441b036a1c Updated test case to reflect previous commit 2022-08-23 14:07:56 -04:00
8817d72cb3 Interfaces can inherit other interfaces 2022-08-23 13:56:59 -04:00
3ef1e706b3 Added ToString method to face section 2022-08-23 13:54:44 -04:00
944fc8514e Add correct output for face test case 2022-08-23 13:46:20 -04:00
Sasha Koshka
cd55a0ad8d Add interface section to tree 2022-08-23 10:56:37 -04:00
Sasha Koshka
f95c7e0b1c Basic test file for interface section 2022-08-23 10:55:50 -04:00
15d1b602b3 Merge pull request 'enum-section' (#6) from enum-section into main
Reviewed-on: #6
2022-08-23 05:38:55 +00:00
Sasha Koshka
c29efd97ba Organized test case members alphabetically 2022-08-23 01:36:40 -04:00
Sasha Koshka
aa84d9a429 Removed space alignment and hex literals from test case check
ToString is not capable of producing this
2022-08-23 01:35:35 -04:00
Sasha Koshka
5dcf3b3d1a Fixed ToString formatting of enum 2022-08-23 01:33:28 -04:00
Sasha Koshka
d8074fa5cb Enum default values are now parsed properly
Previously the parser would stay on the member name and parse it the default
value. It now moves forward and catches the actual default value.
2022-08-23 01:30:56 -04:00
Sasha Koshka
6a6fe8353e Add untested enum parsing 2022-08-21 11:17:56 -04:00
Sasha Koshka
c4f763af5b Added test case for enum section 2022-08-21 02:48:36 -04:00
Sasha Koshka
6fbda34300 Add base enum parsing method 2022-08-21 02:42:25 -04:00
Sasha Koshka
59126f60cc Added enum sections to tree 2022-08-21 02:40:04 -04:00
Sasha Koshka
ca80a5968d Cleaned up example code and made it up-to-date 2022-08-20 15:54:10 -04:00
61819311e9 Merge pull request 'objt-section' (#5) from objt-section into main
Reviewed-on: #5
2022-08-20 19:47:44 +00:00
Sasha Koshka
f3b2d11f59 I swear its not my code thats wrong its the test
No like literally this keeps happening
2022-08-20 15:45:45 -04:00
Sasha Koshka
3900bbe7bf Parser test cases now print out line numbers 2022-08-20 15:45:01 -04:00
Sasha Koshka
b878017b81 The last item of object sections is now saved. 2022-08-20 15:22:25 -04:00
Sasha Koshka
5271876196 Changed data in object test to use objt keyword instead of type 2022-08-20 13:46:10 -04:00
Sasha Koshka
617d76fc46 Object sections now parse properly 2022-08-20 13:43:10 -04:00
Sasha Koshka
0ceaedbcd8 Object sections now ToString properly 2022-08-20 13:42:09 -04:00
Sasha Koshka
edb9c1a0b6 Fixed assignment to entry in nil map 2022-08-20 13:29:04 -04:00
Sasha Koshka
bd433fc65d Untested object section parsing 2022-08-20 13:26:24 -04:00
Sasha Koshka
c847d2187d Fixed the object section test 2022-08-20 13:25:43 -04:00
Sasha Koshka
cb2264977a Added object sections to the tree for real lol 2022-08-20 13:24:56 -04:00
Sasha Koshka
790e7e632e Removed recursive member parsing nonsense from type section 2022-08-20 12:50:32 -04:00
Sasha Koshka
fc1568aece Updated ToString methods to match new tree structure 2022-08-20 12:40:44 -04:00
Sasha Koshka
222c47ced9 Altered tree to separate object and blind type definitions 2022-08-20 02:46:40 -04:00
Sasha Koshka
da6d587a48 Split test cases between blind types and objt types 2022-08-20 02:42:52 -04:00
018499310c Merge pull request 'type-section' (#4) from type-section into main
Reviewed-on: #4
2022-08-20 02:06:44 +00:00
Sasha Koshka
78b8b9dacd Fixed test case for parser
The correct output string was missing a type specifier. The lexer now passes
this test.
2022-08-19 11:37:30 -04:00
Sasha Koshka
2605d1fb09 Fixed nested complex initialization values not parsing 2022-08-19 11:36:30 -04:00
Sasha Koshka
9dce9b2f75 Fixed test formatting 2022-08-19 03:05:25 -04:00
Sasha Koshka
9b4279c052 Fixed ToString of type nodes 2022-08-19 03:03:36 -04:00
Sasha Koshka
2296765e81 Added recursive parsing of type nodes 2022-08-19 03:01:47 -04:00
Sasha Koshka
19d0b3f455 Complex default values of type nodes now ToString properly 2022-08-19 02:36:56 -04:00
Sasha Koshka
e25e7bdf14 Parser can now parse array and object initializations 2022-08-19 02:34:17 -04:00
Sasha Koshka
63419165dd Moved most of type section parsing into reusable type node parsing method 2022-08-19 02:08:18 -04:00
Sasha Koshka
69aaae8f14 Restructured type definitions to use a node tree 2022-08-18 23:38:32 -04:00
Sasha Koshka
717474a59e Removed unnescessary println statements (oopsie) 2022-08-18 20:09:27 -04:00
Sasha Koshka
ef90115a1b Fixed some test case formatting 2022-08-18 20:09:04 -04:00
Sasha Koshka
cced825f74 Changed this one thing to the other thing 2022-08-18 19:40:35 -04:00
Sasha Koshka
9fd3fb1263 Added basic ToString method to TypeSection 2022-08-18 17:45:34 -04:00
Sasha Koshka
5c2a7aeb07 Created base for type section parsing 2022-08-18 17:39:19 -04:00
Sasha Koshka
bc9beb0317 Created test case for type section 2022-08-18 16:56:42 -04:00
Sasha Koshka
a548dcc585 Changed permission codes to only determine private/public/readonly
Changing permissions within the module was redundant and would have just
conflicted with the :mut type qualifier. This is easier to understand.
2022-08-18 12:09:17 -04:00
Sasha Koshka
15eb96e8ac Lexer passes all width tests 2022-08-18 11:35:48 -04:00
Sasha Koshka
120976a0f3 Numbers now tokenize with the correct width 2022-08-18 11:32:50 -04:00
Sasha Koshka
bde4bf8493 String and rune literals now have correct width 2022-08-18 11:25:40 -04:00
Sasha Koshka
a013d4caad Lexer tests now check token width 2022-08-18 11:14:42 -04:00
Sasha Koshka
be9a3603d2 Made structural change to lexer test definitions 2022-08-18 11:02:49 -04:00
Sasha Koshka
54de3d1270 Fixed test columns and widths 2022-08-18 02:06:00 -04:00
Sasha Koshka
a87973c141 Error widths now work properly 2022-08-18 02:04:49 -04:00
Sasha Koshka
85996b2554 Added more error test cases 2022-08-18 01:47:35 -04:00
Sasha Koshka
4780d9cc28 Fixed bug in file where it would report its location one step ahead 2022-08-18 01:35:46 -04:00
Sasha Koshka
bb89009742 Add description method to Location 2022-08-18 01:31:01 -04:00
Sasha Koshka
9e66305001 Created test to check lexer errors 2022-08-18 01:25:02 -04:00
Sasha Koshka
39e4fbe844 Replaced references to file.Error with infoerr.Error 2022-08-18 00:58:40 -04:00
Sasha Koshka
d42d0c5b34 Renamed error module to infoerr 2022-08-18 00:56:45 -04:00
Sasha Koshka
ca5f8202bb Put Error in its own module 2022-08-18 00:51:19 -04:00
Sasha Koshka
abc6e44fb2 Removed Location's dependency on Error 2022-08-18 00:50:57 -04:00
Sasha Koshka
cce841f48e Add getters to File 2022-08-18 00:50:39 -04:00
46 changed files with 1787 additions and 338 deletions


@@ -16,7 +16,7 @@ A directory of ARF files is called a module, and modules will compile to object
 files (one per module) using C as an intermediate language (maybe LLVM IR in the
 future).
-## Design aspects
+## Design Aspects
 These are some design goals that I have followed/am following:
@@ -32,7 +32,7 @@ These are some design goals that I have followed/am following:
 - One line at a time - the language's syntax should encourage writing code that
   flows vertically and not horizontally, with minimal nesting
-## Planned features
+## Planned Features
 - Type definition through inheritence
 - Struct member functions
@@ -49,3 +49,11 @@ These are some design goals that I have followed/am following:
 - [ ] Semantic tree -> C -> object file
 - [ ] Figure out HOW to implement generics
 - [ ] Create a standard library
+## Compiler Progress
+<img src="assets/heatmap.png" alt="Progress heatmap" width="400">
+- Yellow: needs to be completed for the MVP
+- Lime: ongoing progress in this area
+- Green: Already completed

assets/heatmap.png (new binary file, 119 KiB; content not shown)


@@ -1,8 +1,8 @@
 <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 288 80" width="288" height="80">
-<path d="M48 0L112 0L112 32L96 32L96 16L56 16L40 32L16 32L48 0Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
-<path d="M96 64L136 64L136 80L104 80L96 72L96 64Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
-<path d="M120 0L120 32L136 32L136 16L184 16L184 32L176 40L8 40L0 48L0 56L184 56L200 40L200 32L200 0L120 0Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
-<path d="M191 61L204 48L236 80L210 80L191 61Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
-<path d="M256 40L208 40L224 56L256 56L256 40Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
-<path d="M208 0L288 0L288 16L224 16L224 32L208 32L208 0Z" fill="#bf616a" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M48 0L112 0L112 32L96 32L96 16L56 16L40 32L16 32L48 0Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M96 64L136 64L136 80L104 80L96 72L96 64Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M120 0L120 32L136 32L136 16L184 16L184 32L176 40L8 40L0 48L0 56L184 56L200 40L200 32L200 0L120 0Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M191 61L204 48L236 80L210 80L191 61Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M256 40L208 40L224 56L256 56L256 40Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
+<path d="M208 0L288 0L288 16L224 16L224 32L208 32L208 0Z" fill="#b81414" fill-rule="evenodd" opacity="1" stroke="none"/>
 </svg>

(image size unchanged: 847 B before and after)


@@ -1,3 +1,9 @@
 :arf
+require "io"
+---
-func rr main
+func ro main
+> arguments:{String}
+< status:Int 0
+---
 	io.println "hello world"


@@ -5,37 +5,31 @@ require "io"
 ---
 # this is a global variable
-data wn helloText:String "Hello, world!"
+data pv helloText:String "Hello, world!"
 # this is a struct definition
-type rr Greeter:Obj
-	# "Hi." is a string constant. all Greeters will be initialized with a
-	# pointer to it. I don't know really it depends on what I decide that
-	# a String type even is.
-	wr text:String "Hi."
-		"sdfdsf"
-		"asdf"
+objt ro Greeter:Obj
+	rw text:String "Hi."
+		"ahh"
 # this is a function
-func rr main
-	> argc:Int
-	> argv:{String}
-	< status:Int 0
-	---
-	let greeter:Greeter:mut
-	greeter.setText helloText
-	greeter.greet
+func ro main
+	> arguments:{String}
+	< status:Int 0
+	---
+	set greeter:Greeter:mut
+	greeter.setText helloText
+	greeter.greet
 # this is a member function
-func rr greet
+func ro greet
 	@ greeter:{Greeter}
 	---
 	io.println greeter.text
 # this is mutator member function
-func rr setText
+func ro setText
 	@ greeter:{Greeter}
 	> text:String
 	---
 	greeter.text.set text


@@ -1,11 +0,0 @@
-:arf
-require io
----
-func rr main
-	> argc:Int
-	> argv:{String}
-	< status:Int
-	---
-	io.println [io.readln]
-	= status 0


@@ -1,13 +0,0 @@
-:arf
----
-data:{Int 6}
-	-39480 398 29 0x3AFe3 0b10001010110 0o666
-func rr literals
----
-	= stringLiteral:String "skadjlsakdj"
-	= intArrayLiteral:{Int 3} 2398
-		-2938 324
-	= runeLiteral:Rune 'a'
-	= floatArrayLiteral:{F64 5} 3248.23 0.324 -94.29

face_test.go (new file, 21 lines)

@@ -0,0 +1,21 @@
+package parser
+
+import "testing"
+
+func TestFace (test *testing.T) {
+	checkTree ("../tests/parser/face",
+`:arf
+---
+face ro Destroyer:Face
+	destroy
+face ro ReadWriter:Face
+	read
+		> into:{Byte ..}
+		< read:Int
+		< err:Error
+	write
+		> data:{Byte ..}
+		< wrote:Int
+		< err:Error
+`, test)
+}


@@ -8,6 +8,8 @@ type File struct {
 	path          string
 	file          *os.File
 	reader        *bufio.Reader
+	realLine      int
+	realColumn    int
 	currentLine   int
 	currentColumn int
 	lines         []string
@@ -42,6 +44,9 @@ func (file *File) Read (bytes []byte) (amountRead int, err error) {
 	// store the character in the file
 	for _, char := range bytes {
+		file.realLine   = file.currentLine
+		file.realColumn = file.currentColumn
+
 		if char == '\n' {
 			file.lines = append(file.lines, "")
 			file.currentLine ++
@@ -61,6 +66,9 @@ func (file *File) ReadRune () (char rune, size int, err error) {
 	char, size, err = file.reader.ReadRune()
+	file.realLine   = file.currentLine
+	file.realColumn = file.currentColumn
+
 	if char == '\n' {
 		file.lines = append(file.lines, "")
 		file.currentLine ++
@@ -106,8 +114,18 @@ func (file *File) Close () {
 func (file *File) Location (width int) (location Location) {
 	return Location {
 		file:   file,
-		row:    file.currentLine,
-		column: file.currentColumn,
+		row:    file.realLine,
+		column: file.realColumn,
 		width:  width,
 	}
 }
+
+// Path returns the path that the file is located at.
+func (file *File) Path () (path string) {
+	return file.path
+}
+
+// GetLine returns the line at the specified index.
+func (file *File) GetLine (index int) (line string) {
+	return file.lines[index]
+}
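The `realLine`/`realColumn` fields added above address the bug noted in commit 4780d9cc28 ("Fixed bug in file where it would report its location one step ahead"): advancing past a rune moves the cursor before the caller asks where that rune was. The pattern is to record the position *before* advancing. A minimal, self-contained sketch of that record-before-advance idea (`tracker` and `scan` are illustrative names, not part of the real `file` package):

```go
package main

import "fmt"

// tracker remembers where the most recently read rune was, not where
// the next one will be: line/column advance past each rune, while
// realLine/realColumn lag one rune behind.
type tracker struct {
	line, column         int
	realLine, realColumn int
}

// advance consumes one rune, saving the pre-advance position first.
func (t *tracker) advance(char rune) {
	t.realLine = t.line
	t.realColumn = t.column
	if char == '\n' {
		t.line++
		t.column = 0
	} else {
		t.column++
	}
}

// scan runs the tracker over a string and reports both the position of
// the last rune read and the cursor position after it.
func scan(source string) (realLine, realColumn, line, column int) {
	t := &tracker{}
	for _, r := range source {
		t.advance(r)
	}
	return t.realLine, t.realColumn, t.line, t.column
}

func main() {
	rl, rc, l, c := scan("ab\nc")
	fmt.Println(rl, rc) // where the final 'c' was read: line 1, column 0
	fmt.Println(l, c)   // where the cursor sits now: line 1, column 1
}
```

Without the lagging fields, an error on a newline would be reported at the start of the following line, which is exactly the "one step ahead" symptom the commit describes.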


@@ -1,5 +1,7 @@
 package file
 
+import "fmt"
+
 // Location represents a specific point in a file. It is used for error
 // reporting.
 type Location struct {
@@ -9,11 +11,6 @@ type Location struct {
 	width  int
 }
 
-// NewError creates a new error at this location.
-func (location Location) NewError (message string, kind ErrorKind) (err Error) {
-	return NewError(location, message, kind)
-}
-
 // File returns the file the location is in
 func (location Location) File () (file *File) {
 	return location.file
@@ -36,3 +33,17 @@ func (location Location) Column () (column int) {
 func (location Location) Width () (width int) {
 	return location.width
 }
+
+// SetWidth sets the location's width
+func (location *Location) SetWidth (width int) {
+	location.width = width
+}
+
+// Describe generates a description of the location for debug purposes
+func (location Location) Describe () (description string) {
+	return fmt.Sprint (
+		"in ",      location.file.Path(),
+		" row ",    location.row,
+		" column ", location.column,
+		" width ",  location.width)
+}


@@ -1,7 +1,8 @@
-package file
+package infoerr
 
 import "os"
 import "fmt"
+import "git.tebibyte.media/sashakoshka/arf/file"
 
 type ErrorKind int
@@ -11,14 +12,14 @@ const (
 )
 
 type Error struct {
+	Location file.Location
 	message string
 	kind    ErrorKind
 }
 
 // NewError creates a new error at the specified location.
 func NewError (
-	location Location,
+	location file.Location,
 	message string,
 	kind ErrorKind,
 ) (
@@ -41,24 +42,23 @@ func (err Error) Error () (formattedMessage string) {
 	}
 
 	// print information about the location of the mistake
-	if err.width > 0 {
+	if err.Width() > 0 {
 		formattedMessage += fmt.Sprint (
-			" \033[34m", err.Location.row + 1,
-			":", err.Location.column + 1)
+			" \033[34m", err.Row() + 1,
+			":", err.Column() + 1)
 	}
 	formattedMessage +=
 		" \033[90min\033[0m " +
-		err.Location.file.path + "\n"
+		err.File().Path() + "\n"
 
-	if err.width > 0 {
+	if err.Width() > 0 {
 		// print erroneous line
-		line := err.Location.file.lines[err.Location.row]
-		formattedMessage +=
-			err.Location.file.lines[err.Location.row] + "\n"
+		line := err.File().GetLine(err.Row())
+		formattedMessage += line + "\n"
 
 		// position error marker
 		var index int
-		for index = 0; index < err.Location.column; index ++ {
+		for index = 0; index < err.Column(); index ++ {
 			if line[index] == '\t' {
 				formattedMessage += "\t"
 			} else {
@@ -67,7 +67,7 @@ func (err Error) Error () (formattedMessage string) {
 		}
 
 		// print an arrow with a tail spanning the width of the mistake
-		for err.width > 1 {
+		for index < err.Column() + err.Width() - 1 {
 			if line[index] == '\t' {
 				formattedMessage += "--------"
 			} else {
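The error formatter above pads the marker line with a tab wherever the source line has a tab, so the arrow stays aligned however wide the terminal renders tabs. A simplified standalone sketch of that alignment trick (`markerLine` is a hypothetical helper, not the actual `infoerr` code, and it widens tabs inside the error span to a fixed eight dashes):

```go
package main

import (
	"fmt"
	"strings"
)

// markerLine builds the line printed under an erroneous source line:
// padding up to the error column, a caret, then a tail spanning the
// rest of the error's width. Tabs in the padding are echoed as tabs so
// the caret lines up with the source line above it.
func markerLine(line string, column, width int) string {
	var out strings.Builder
	index := 0
	for ; index < column; index++ {
		if line[index] == '\t' {
			out.WriteByte('\t')
		} else {
			out.WriteByte(' ')
		}
	}
	out.WriteByte('^')
	index++
	for index < column+width {
		if index < len(line) && line[index] == '\t' {
			// a tab inside the span gets a wide tail segment
			out.WriteString("--------")
		} else {
			out.WriteByte('-')
		}
		index++
	}
	return out.String()
}

func main() {
	line := "\tfoo bar"
	fmt.Println(line)
	fmt.Println(markerLine(line, 1, 3)) // marks "foo", after the tab
}
```

The key point is that the padding loop copies the *source line's* whitespace rather than assuming one column per character, which is why the marker survives any tab-stop setting.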


@@ -3,6 +3,7 @@ package lexer
 import "io"
 import "git.tebibyte.media/sashakoshka/arf/file"
 import "git.tebibyte.media/sashakoshka/arf/types"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
 
 // LexingOperation holds information about an ongoing lexing operataion.
 type LexingOperation struct {
@@ -34,10 +35,10 @@ func (lexer *LexingOperation) tokenize () (err error) {
 		err = lexer.nextRune()
 
 		if err != nil || shebangCheck[index] != lexer.char {
-			err = file.NewError (
+			err = infoerr.NewError (
 				lexer.file.Location(1),
 				"not an arf file",
-				file.ErrorKindError)
+				infoerr.ErrorKindError)
 			return
 		}
 	}
@@ -92,14 +93,14 @@ func (lexer *LexingOperation) tokenizeAlphaBeginning () (err error) {
 	}
 
 	token.value = got
+	token.location.SetWidth(len(got))
 
 	if len(got) == 2 {
-		firstValid  := got[0] == 'n' || got[0] == 'r' || got[0] == 'w'
-		secondValid := got[1] == 'n' || got[1] == 'r' || got[1] == 'w'
-
-		if firstValid && secondValid {
+		permission, isPermission := types.PermissionFrom(got)
+
+		if isPermission {
 			token.kind  = TokenKindPermission
-			token.value = types.PermissionFrom(got)
+			token.value = permission
 		}
 	}
@@ -123,10 +124,10 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if !previousToken.Is(TokenKindNewline) {
 			err = lexer.nextRune()
 
-			file.NewError (
+			infoerr.NewError (
 				lexer.file.Location(1),
 				"tab not used as indent",
-				file.ErrorKindWarn).Print()
+				infoerr.ErrorKindWarn).Print()
 			return
 		}
@@ -142,6 +143,7 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		}
 
 		token.value = indentLevel
+		token.location.SetWidth(indentLevel)
 		lexer.addToken(token)
 	case '\n':
 		// line break
@@ -182,6 +184,7 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '.' {
 			token.kind = TokenKindElipsis
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	case ',':
@@ -217,6 +220,7 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '+' {
 			token.kind = TokenKindIncrement
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	case '-':
@@ -238,17 +242,40 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		err = lexer.nextRune()
 	case '!':
 		token := lexer.newToken()
-		token.kind = TokenKindExclamation
-		lexer.addToken(token)
 		err = lexer.nextRune()
+		if err != nil { return }
+		token.kind = TokenKindExclamation
+		if lexer.char == '=' {
+			token.kind = TokenKindNotEqualTo
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
+		}
+		lexer.addToken(token)
 	case '%':
 		token := lexer.newToken()
-		token.kind = TokenKindPercent
-		lexer.addToken(token)
 		err = lexer.nextRune()
+		if err != nil { return }
+		token.kind = TokenKindPercent
+		if lexer.char == '=' {
+			token.kind = TokenKindPercentAssignment
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
+		}
+		lexer.addToken(token)
 	case '~':
 		token := lexer.newToken()
+		err = lexer.nextRune()
+		if err != nil { return }
 		token.kind = TokenKindTilde
+		if lexer.char == '=' {
+			token.kind = TokenKindTildeAssignment
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
+		}
+		lexer.addToken(token)
+	case '=':
+		token := lexer.newToken()
+		token.kind = TokenKindEqualTo
 		lexer.addToken(token)
 		err = lexer.nextRune()
 	case '<':
@@ -259,6 +286,16 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '<' {
 			token.kind = TokenKindLShift
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
+			if lexer.char == '=' {
+				token.kind = TokenKindLShiftAssignment
+				err = lexer.nextRune()
+				token.location.SetWidth(2)
+			}
+		} else if lexer.char == '=' {
+			token.kind = TokenKindLessThanEqualTo
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	case '>':
@@ -269,6 +306,16 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '>' {
 			token.kind = TokenKindRShift
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
+			if lexer.char == '=' {
+				token.kind = TokenKindRShiftAssignment
+				err = lexer.nextRune()
+				token.location.SetWidth(2)
+			}
+		} else if lexer.char == '=' {
+			token.kind = TokenKindGreaterThanEqualTo
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	case '|':
@@ -279,6 +326,11 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '|' {
 			token.kind = TokenKindLogicalOr
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
+		} else if lexer.char == '=' {
+			token.kind = TokenKindBinaryOrAssignment
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	case '&':
@@ -289,14 +341,19 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
 		if lexer.char == '&' {
 			token.kind = TokenKindLogicalAnd
 			err = lexer.nextRune()
+			token.location.SetWidth(2)
+		} else if lexer.char == '=' {
+			token.kind = TokenKindBinaryAndAssignment
+			err = lexer.nextRune()
+			token.location.SetWidth(2)
 		}
 		lexer.addToken(token)
 	default:
-		err = file.NewError (
+		err = infoerr.NewError (
 			lexer.file.Location(1),
 			"unexpected symbol character " +
 			string(lexer.char),
-			file.ErrorKindError)
+			infoerr.ErrorKindError)
 		return
 	}
@@ -310,6 +367,7 @@ func (lexer *LexingOperation) tokenizeDashBeginning () (err error) {
 	if lexer.char == '-' {
 		token := lexer.newToken()
 		token.kind = TokenKindDecrement
+		token.location.SetWidth(2)
 		err = lexer.nextRune()
 		if err != nil { return }
@@ -317,11 +375,13 @@ func (lexer *LexingOperation) tokenizeDashBeginning () (err error) {
 		if lexer.char == '-' {
 			token.kind = TokenKindSeparator
 			lexer.nextRune()
+			token.location.SetWidth(3)
 		}
 		lexer.addToken(token)
 	} else if lexer.char == '>' {
 		token := lexer.newToken()
 		token.kind = TokenKindReturnDirection
+		token.location.SetWidth(2)
 		err = lexer.nextRune()
 		if err != nil { return }
@@ -362,9 +422,9 @@ func (lexer *LexingOperation) skipSpaces () (err error) {
 func (lexer *LexingOperation) nextRune () (err error) {
 	lexer.char, _, err = lexer.file.ReadRune()
 	if err != nil && err != io.EOF {
-		return file.NewError (
+		return infoerr.NewError (
 			lexer.file.Location(1),
-			err.Error(), file.ErrorKindError)
+			err.Error(), infoerr.ErrorKindError)
 	}
 	return
 }
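The new operator cases all follow one pattern: tentatively set the one-character token kind, read one rune ahead, and upgrade the kind and width when `=` follows. A standalone sketch of that lookahead pattern, using illustrative kind names rather than the real token package's:

```go
package main

import "fmt"

// Toy token kinds for the sketch; the real lexer's kinds live in its
// own token package.
const (
	kindExclamation       = "Exclamation"
	kindNotEqualTo        = "NotEqualTo"
	kindPercent           = "Percent"
	kindPercentAssignment = "PercentAssignment"
)

// lexOperator consumes one operator starting at src[pos]. It assumes
// the one-character kind first, then peeks at the next byte and
// upgrades to the two-character kind (and a width of 2) when it sees
// '='. This mirrors the treatment of '!', '%', '~', and friends above.
func lexOperator(src string, pos int) (kind string, width int) {
	width = 1
	switch src[pos] {
	case '!':
		kind = kindExclamation
		if pos+1 < len(src) && src[pos+1] == '=' {
			kind = kindNotEqualTo
			width = 2
		}
	case '%':
		kind = kindPercent
		if pos+1 < len(src) && src[pos+1] == '=' {
			kind = kindPercentAssignment
			width = 2
		}
	}
	return
}

func main() {
	fmt.Println(lexOperator("a != b", 2)) // NotEqualTo 2
	fmt.Println(lexOperator("a ! b", 2))  // Exclamation 1
}
```

Deferring `addToken` until after the peek, as the diff does, is what makes the upgrade possible: once a token is emitted its kind and width are fixed.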


@@ -3,8 +3,17 @@ package lexer
import "testing" import "testing"
import "git.tebibyte.media/sashakoshka/arf/file" import "git.tebibyte.media/sashakoshka/arf/file"
import "git.tebibyte.media/sashakoshka/arf/types" import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/infoerr"
func checkTokenSlice (filePath string, correct []Token, test *testing.T) { func quickToken (width int, kind TokenKind, value any) (token Token) {
token.location.SetWidth(width)
token.kind = kind
token.value = value
return
}
func checkTokenSlice (filePath string, test *testing.T, correct ...Token) {
test.Log("checking lexer results for", filePath)
file, err := file.Open(filePath) file, err := file.Open(filePath)
if err != nil { if err != nil {
test.Log(err) test.Log(err)
@@ -34,6 +43,15 @@ func checkTokenSlice (filePath string, correct []Token, test *testing.T) {
test.Log("token slice length match", len(tokens), "=", len(correct)) test.Log("token slice length match", len(tokens), "=", len(correct))
for index, token := range tokens { for index, token := range tokens {
if token.location.Width() != correct[index].location.Width() {
test.Log("token", index, "has bad width")
test.Log (
"have", token.location.Width(),
"want", correct[index].location.Width())
test.Fail()
return
}
if !token.Equals(correct[index]) { if !token.Equals(correct[index]) {
test.Log("token", index, "not equal") test.Log("token", index, "not equal")
test.Log ( test.Log (
@@ -46,118 +64,209 @@ func checkTokenSlice (filePath string, correct []Token, test *testing.T) {
test.Log("token slice content match") test.Log("token slice content match")
} }
func compareErr (
filePath string,
correctKind infoerr.ErrorKind,
correctMessage string,
correctRow int,
correctColumn int,
correctWidth int,
test *testing.T,
) {
test.Log("testing errors in", filePath)
file, err := file.Open(filePath)
if err != nil {
test.Log(err)
test.Fail()
return
}
_, err = Tokenize(file)
check := err.(infoerr.Error)
test.Log("error that was recieved:")
test.Log(check)
if check.Kind() != correctKind {
test.Log("mismatched error kind")
test.Log("- want:", correctKind)
test.Log("- have:", check.Kind())
test.Fail()
}
if check.Message() != correctMessage {
test.Log("mismatched error message")
test.Log("- want:", correctMessage)
test.Log("- have:", check.Message())
test.Fail()
}
if check.Row() != correctRow {
test.Log("mismatched error row")
test.Log("- want:", correctRow)
test.Log("- have:", check.Row())
test.Fail()
}
if check.Column() != correctColumn {
test.Log("mismatched error column")
test.Log("- want:", correctColumn)
test.Log("- have:", check.Column())
test.Fail()
}
if check.Width() != correctWidth {
test.Log("mismatched error width")
test.Log("- want:", check.Width())
test.Log("- have:", correctWidth)
test.Fail()
}
}
func TestTokenizeAll (test *testing.T) {
-checkTokenSlice("../tests/lexer/all.arf", []Token {
-Token { kind: TokenKindSeparator },
-Token { kind: TokenKindPermission, value: types.Permission {
-Internal: types.ModeRead,
-External: types.ModeWrite,
-}},
-Token { kind: TokenKindReturnDirection },
-Token { kind: TokenKindInt, value: int64(-349820394) },
-Token { kind: TokenKindUInt, value: uint64(932748397) },
-Token { kind: TokenKindFloat, value: 239485.37520 },
-Token { kind: TokenKindString, value: "hello world!\n" },
-Token { kind: TokenKindRune, value: 'E' },
-Token { kind: TokenKindName, value: "helloWorld" },
-Token { kind: TokenKindColon },
-Token { kind: TokenKindDot },
-Token { kind: TokenKindComma },
-Token { kind: TokenKindElipsis },
-Token { kind: TokenKindLBracket },
-Token { kind: TokenKindRBracket },
-Token { kind: TokenKindLBrace },
-Token { kind: TokenKindRBrace },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindPlus },
-Token { kind: TokenKindMinus },
-Token { kind: TokenKindIncrement },
-Token { kind: TokenKindDecrement },
-Token { kind: TokenKindAsterisk },
-Token { kind: TokenKindSlash },
-Token { kind: TokenKindAt },
-Token { kind: TokenKindExclamation },
-Token { kind: TokenKindPercent },
-Token { kind: TokenKindTilde },
-Token { kind: TokenKindLessThan },
-Token { kind: TokenKindLShift },
-Token { kind: TokenKindGreaterThan },
-Token { kind: TokenKindRShift },
-Token { kind: TokenKindBinaryOr },
-Token { kind: TokenKindLogicalOr },
-Token { kind: TokenKindBinaryAnd },
-Token { kind: TokenKindLogicalAnd },
-Token { kind: TokenKindNewline },
-}, test)
+checkTokenSlice("../tests/lexer/all.arf", test,
+quickToken(3, TokenKindSeparator, nil),
+quickToken(2, TokenKindPermission, types.PermissionReadWrite),
+quickToken(2, TokenKindReturnDirection, nil),
+quickToken(10, TokenKindInt, int64(-349820394)),
+quickToken(9, TokenKindUInt, uint64(932748397)),
+quickToken(12, TokenKindFloat, 239485.37520),
+quickToken(16, TokenKindString, "hello world!\n"),
+quickToken(3, TokenKindRune, 'E'),
+quickToken(10, TokenKindName, "helloWorld"),
+quickToken(1, TokenKindColon, nil),
+quickToken(1, TokenKindDot, nil),
+quickToken(1, TokenKindComma, nil),
+quickToken(2, TokenKindElipsis, nil),
+quickToken(1, TokenKindLBracket, nil),
+quickToken(1, TokenKindRBracket, nil),
+quickToken(1, TokenKindLBrace, nil),
+quickToken(1, TokenKindRBrace, nil),
+quickToken(1, TokenKindNewline, nil),
+quickToken(1, TokenKindPlus, nil),
+quickToken(1, TokenKindMinus, nil),
+quickToken(2, TokenKindIncrement, nil),
+quickToken(2, TokenKindDecrement, nil),
+quickToken(1, TokenKindAsterisk, nil),
+quickToken(1, TokenKindSlash, nil),
+quickToken(1, TokenKindAt, nil),
+quickToken(1, TokenKindExclamation, nil),
+quickToken(1, TokenKindPercent, nil),
+quickToken(2, TokenKindPercentAssignment, nil),
+quickToken(1, TokenKindTilde, nil),
+quickToken(2, TokenKindTildeAssignment, nil),
+quickToken(1, TokenKindEqualTo, nil),
+quickToken(2, TokenKindNotEqualTo, nil),
+quickToken(1, TokenKindLessThan, nil),
+quickToken(2, TokenKindLessThanEqualTo, nil),
+quickToken(2, TokenKindLShift, nil),
+quickToken(3, TokenKindLShiftAssignment, nil),
+quickToken(1, TokenKindGreaterThan, nil),
+quickToken(2, TokenKindGreaterThanEqualTo, nil),
+quickToken(2, TokenKindRShift, nil),
+quickToken(3, TokenKindRShiftAssignment, nil),
+quickToken(1, TokenKindBinaryOr, nil),
+quickToken(2, TokenKindBinaryOrAssignment, nil),
+quickToken(2, TokenKindLogicalOr, nil),
+quickToken(1, TokenKindBinaryAnd, nil),
+quickToken(2, TokenKindBinaryAndAssignment, nil),
+quickToken(2, TokenKindLogicalAnd, nil),
+quickToken(1, TokenKindBinaryXor, nil),
+quickToken(2, TokenKindBinaryXorAssignment, nil),
+quickToken(1, TokenKindNewline, nil),
+)
}
func TestTokenizeNumbers (test *testing.T) {
-checkTokenSlice("../tests/lexer/numbers.arf", []Token {
-Token { kind: TokenKindUInt, value: uint64(0) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindUInt, value: uint64(8) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindUInt, value: uint64(83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindUInt, value: uint64(83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindUInt, value: uint64(83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindUInt, value: uint64(83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindInt, value: int64(-83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindInt, value: int64(-83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindInt, value: int64(-83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindInt, value: int64(-83628266) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindFloat, value: float64(0.123478) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindFloat, value: float64(234.3095) },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindFloat, value: float64(-2.312) },
-Token { kind: TokenKindNewline },
-}, test)
+checkTokenSlice("../tests/lexer/numbers.arf", test,
+quickToken(1, TokenKindUInt, uint64(0)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(1, TokenKindUInt, uint64(8)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(8, TokenKindUInt, uint64(83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(29, TokenKindUInt, uint64(83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(9, TokenKindUInt, uint64(83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(10, TokenKindUInt, uint64(83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(9, TokenKindInt, int64(-83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(30, TokenKindInt, int64(-83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(10, TokenKindInt, int64(-83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(11, TokenKindInt, int64(-83628266)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(8, TokenKindFloat, float64(0.123478)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(8, TokenKindFloat, float64(234.3095)),
+quickToken(1, TokenKindNewline, nil),
+quickToken(6, TokenKindFloat, float64(-2.312)),
+quickToken(1, TokenKindNewline, nil),
+)
}
func TestTokenizeText (test *testing.T) {
-checkTokenSlice("../tests/lexer/text.arf", []Token {
-Token { kind: TokenKindString, value: "hello world!\a\b\f\n\r\t\v'\"\\" },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindRune, value: '\a' },
-Token { kind: TokenKindRune, value: '\b' },
-Token { kind: TokenKindRune, value: '\f' },
-Token { kind: TokenKindRune, value: '\n' },
-Token { kind: TokenKindRune, value: '\r' },
-Token { kind: TokenKindRune, value: '\t' },
-Token { kind: TokenKindRune, value: '\v' },
-Token { kind: TokenKindRune, value: '\'' },
-Token { kind: TokenKindRune, value: '"' },
-Token { kind: TokenKindRune, value: '\\' },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindString, value: "hello world \x40\u0040\U00000040!" },
-Token { kind: TokenKindNewline },
-}, test)
+checkTokenSlice("../tests/lexer/text.arf", test,
+quickToken(34, TokenKindString, "hello world!\a\b\f\n\r\t\v'\"\\"),
+quickToken(1, TokenKindNewline, nil),
+quickToken(4, TokenKindRune, '\a'),
+quickToken(4, TokenKindRune, '\b'),
+quickToken(4, TokenKindRune, '\f'),
+quickToken(4, TokenKindRune, '\n'),
+quickToken(4, TokenKindRune, '\r'),
+quickToken(4, TokenKindRune, '\t'),
+quickToken(4, TokenKindRune, '\v'),
+quickToken(4, TokenKindRune, '\''),
+quickToken(4, TokenKindRune, '"'),
+quickToken(4, TokenKindRune, '\\'),
+quickToken(1, TokenKindNewline, nil),
+quickToken(35, TokenKindString, "hello world \x40\u0040\U00000040!"),
+quickToken(1, TokenKindNewline, nil),
+)
}
func TestTokenizeIndent (test *testing.T) {
-checkTokenSlice("../tests/lexer/indent.arf", []Token {
-Token { kind: TokenKindName, value: "line1" },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindIndent, value: 1 },
-Token { kind: TokenKindName, value: "line2" },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindIndent, value: 4 },
-Token { kind: TokenKindName, value: "line3" },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindName, value: "line4" },
-Token { kind: TokenKindNewline },
-Token { kind: TokenKindIndent, value: 2 },
-Token { kind: TokenKindName, value: "line5" },
-Token { kind: TokenKindNewline },
-}, test)
+checkTokenSlice("../tests/lexer/indent.arf", test,
+quickToken(5, TokenKindName, "line1"),
+quickToken(1, TokenKindNewline, nil),
+quickToken(1, TokenKindIndent, 1),
+quickToken(5, TokenKindName, "line2"),
+quickToken(1, TokenKindNewline, nil),
+quickToken(4, TokenKindIndent, 4),
+quickToken(5, TokenKindName, "line3"),
+quickToken(1, TokenKindNewline, nil),
+quickToken(5, TokenKindName, "line4"),
+quickToken(1, TokenKindNewline, nil),
+quickToken(2, TokenKindIndent, 2),
+quickToken(5, TokenKindName, "line5"),
+quickToken(1, TokenKindNewline, nil),
+)
}
func TestTokenizeErr (test *testing.T) {
compareErr (
"../tests/lexer/error/unexpectedSymbol.arf",
infoerr.ErrorKindError,
"unexpected symbol character ;",
1, 5, 1,
test)
compareErr (
"../tests/lexer/error/excessDataRune.arf",
infoerr.ErrorKindError,
"excess data in rune literal",
1, 1, 7,
test)
compareErr (
"../tests/lexer/error/unknownEscape.arf",
infoerr.ErrorKindError,
"unknown escape character g",
1, 2, 1,
test)
}
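The rewritten tests above build expected tokens with a `quickToken` helper whose definition is outside this diff. A minimal sketch of what such a helper presumably looks like; the field names, the bare `width` field, and the numeric `TokenKind` values here are illustrative assumptions, not the repository's actual definitions:

```go
package main

import "fmt"

// TokenKind stands in for the lexer's token kind enum.
type TokenKind int

// Token is a pared-down stand-in for the lexer's Token type; the real one
// stores a file.Location (whose width SetWidth adjusts) rather than a bare
// width field.
type Token struct {
	width int
	kind  TokenKind
	value interface{}
}

// quickToken builds an expected token from a width, kind, and value,
// matching the call shape used in the rewritten test cases.
func quickToken(width int, kind TokenKind, value interface{}) (token Token) {
	token.width = width
	token.kind = kind
	token.value = value
	return
}

func main() {
	token := quickToken(10, 4, int64(-349820394))
	fmt.Println(token.width, token.kind, token.value)
}
```

The leading width argument mirrors the width the lexer now records via `token.location.SetWidth`, which is what lets these tests assert token widths without spelling out full locations.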


@@ -1,32 +1,55 @@
package lexer
import "strconv"
-import "git.tebibyte.media/sashakoshka/arf/file"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
// tokenizeNumberBeginning lexes a token that starts with a number.
func (lexer *LexingOperation) tokenizeNumberBeginning (negative bool) (err error) {
var intNumber uint64
var floatNumber float64
var isFloat bool
+var amountRead int
+var totalRead int
token := lexer.newToken()
if lexer.char == '0' {
lexer.nextRune()
+totalRead ++
if lexer.char == 'x' {
lexer.nextRune()
-intNumber, floatNumber, isFloat, err = lexer.tokenizeNumber(16)
+totalRead ++
+intNumber, floatNumber,
+isFloat, amountRead,
+err = lexer.tokenizeNumber(16)
} else if lexer.char == 'b' {
lexer.nextRune()
-intNumber, floatNumber, isFloat, err = lexer.tokenizeNumber(2)
+totalRead ++
+intNumber, floatNumber,
+isFloat, amountRead,
+err = lexer.tokenizeNumber(2)
} else if lexer.char == '.' {
-intNumber, floatNumber, isFloat, err = lexer.tokenizeNumber(10)
+intNumber, floatNumber,
+isFloat, amountRead,
+err = lexer.tokenizeNumber(10)
} else if lexer.char >= '0' && lexer.char <= '9' {
-intNumber, floatNumber, isFloat, err = lexer.tokenizeNumber(8)
+intNumber, floatNumber,
+isFloat, amountRead,
+err = lexer.tokenizeNumber(8)
}
} else {
-intNumber, floatNumber, isFloat, err = lexer.tokenizeNumber(10)
+intNumber, floatNumber,
+isFloat, amountRead,
+err = lexer.tokenizeNumber(10)
}
+totalRead += amountRead
+if negative {
+totalRead += 1
+}
if err != nil { return }
@@ -47,7 +70,8 @@ func (lexer *LexingOperation) tokenizeNumberBeginning (negative bool) (err error
token.value = uint64(intNumber)
}
}
+token.location.SetWidth(totalRead)
lexer.addToken(token)
return
}
@@ -82,6 +106,7 @@ func (lexer *LexingOperation) tokenizeNumber (
intNumber uint64,
floatNumber float64,
isFloat bool,
+amountRead int,
err error,
) {
got := ""
@@ -89,10 +114,10 @@ func (lexer *LexingOperation) tokenizeNumber (
if !runeIsDigit(lexer.char, radix) { break }
if lexer.char == '.' {
if radix != 10 {
-err = file.NewError (
+err = infoerr.NewError (
lexer.file.Location(1),
"floats must have radix of 10",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
isFloat = true
@@ -103,6 +128,8 @@ func (lexer *LexingOperation) tokenizeNumber (
if err != nil { return }
}
+amountRead = len(got)
if isFloat {
floatNumber, err = strconv.ParseFloat(got, 64)
} else {
@@ -110,10 +137,10 @@ func (lexer *LexingOperation) tokenizeNumber (
}
if err != nil {
-err = file.NewError (
+err = infoerr.NewError (
lexer.file.Location(1),
"could not parse number: " + err.Error(),
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}


@@ -1,7 +1,7 @@
package lexer
import "strconv"
-import "git.tebibyte.media/sashakoshka/arf/file"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
// tokenizeString tokenizes a string or rune literal.
func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
@@ -9,17 +9,20 @@ func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
if err != nil { return }
token := lexer.newToken()
-got := ""
+got := ""
+tokenWidth := 2
+beginning := lexer.file.Location(1)
for {
-// TODO: add hexadecimal escape codes
if lexer.char == '\\' {
err = lexer.nextRune()
+tokenWidth ++
if err != nil { return }
var actual rune
-actual, err = lexer.getEscapeSequence()
+var amountRead int
+actual, amountRead, err = lexer.getEscapeSequence()
+tokenWidth += amountRead
if err != nil { return }
got += string(actual)
@@ -27,6 +30,7 @@ func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
got += string(lexer.char)
err = lexer.nextRune()
+tokenWidth ++
if err != nil { return }
}
@@ -40,12 +44,13 @@ func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
err = lexer.nextRune()
if err != nil { return }
+beginning.SetWidth(len(got))
if isRuneLiteral {
if len(got) > 1 {
-err = file.NewError (
-lexer.file.Location(1),
+err = infoerr.NewError (
+beginning,
"excess data in rune literal",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
@@ -56,6 +61,7 @@ func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
token.value = got
}
+token.location.SetWidth(tokenWidth)
lexer.addToken(token)
return
}
@@ -76,16 +82,22 @@ var escapeSequenceMap = map[rune] rune {
}
// getEscapeSequence reads an escape sequence in a string or rune literal.
-func (lexer *LexingOperation) getEscapeSequence () (result rune, err error) {
+func (lexer *LexingOperation) getEscapeSequence () (
+result rune,
+amountRead int,
+err error,
+) {
result, exists := escapeSequenceMap[lexer.char]
if exists {
err = lexer.nextRune()
+amountRead ++
return
} else if lexer.char >= '0' && lexer.char <= '7' {
// octal escape sequence
number := string(lexer.char)
err = lexer.nextRune()
+amountRead ++
if err != nil { return }
for len(number) < 3 {
@@ -94,14 +106,15 @@ func (lexer *LexingOperation) getEscapeSequence () (result rune, err error) {
number += string(lexer.char)
err = lexer.nextRune()
+amountRead ++
if err != nil { return }
}
if len(number) < 3 {
-err = file.NewError (
+err = infoerr.NewError (
lexer.file.Location(1),
"octal escape sequence too short",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
@@ -117,6 +130,7 @@ func (lexer *LexingOperation) getEscapeSequence () (result rune, err error) {
number := ""
err = lexer.nextRune()
+amountRead ++
if err != nil { return }
for len(number) < want {
@@ -128,24 +142,25 @@ func (lexer *LexingOperation) getEscapeSequence () (result rune, err error) {
number += string(lexer.char)
err = lexer.nextRune()
+amountRead ++
if err != nil { return }
}
if len(number) < want {
-err = file.NewError (
+err = infoerr.NewError (
lexer.file.Location(1),
"hex escape sequence too short ",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
parsedNumber, _ := strconv.ParseInt(number, 16, want * 4)
result = rune(parsedNumber)
} else {
-err = file.NewError (
+err = infoerr.NewError (
lexer.file.Location(1),
"unknown escape character " +
-string(lexer.char), file.ErrorKindError)
+string(lexer.char), infoerr.ErrorKindError)
return
}
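The escape handling above can be condensed into a standalone sketch: a table for the single-character escapes, plus fixed digit counts for `\x`, `\u`, and `\U`, returning the number of consumed runes alongside the result the way the patched `getEscapeSequence` returns `amountRead`. Octal escapes and short-input handling are omitted for brevity, and the helper name is illustrative:

```go
package main

import (
	"fmt"
	"strconv"
)

// escapeSequenceMap mirrors the single-character escapes the lexer supports.
var escapeSequenceMap = map[rune]rune{
	'a': '\a', 'b': '\b', 'f': '\f', 'n': '\n', 'r': '\r',
	't': '\t', 'v': '\v', '\'': '\'', '"': '"', '\\': '\\',
}

// decodeEscape decodes the escape sequence at the start of input (the runes
// after the backslash) and reports how many runes it consumed, the same
// bookkeeping the patched getEscapeSequence does with amountRead.
func decodeEscape(input []rune) (result rune, amountRead int, err error) {
	if simple, ok := escapeSequenceMap[input[0]]; ok {
		return simple, 1, nil
	}
	// \x, \u, and \U take 2, 4, and 8 hex digits respectively
	want := map[rune]int{'x': 2, 'u': 4, 'U': 8}[input[0]]
	if want == 0 {
		return 0, 0, fmt.Errorf("unknown escape character %c", input[0])
	}
	parsed, err := strconv.ParseInt(string(input[1:1+want]), 16, 32)
	return rune(parsed), want + 1, err
}

func main() {
	result, amountRead, _ := decodeEscape([]rune("u0040"))
	fmt.Printf("%c %d\n", result, amountRead)
}
```

Tracking the consumed count at the decoder, rather than re-measuring afterward, is what lets `tokenizeString` keep `tokenWidth` accurate even though escape sequences occupy more source runes than the single rune they produce.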


@@ -2,6 +2,7 @@ package lexer
import "fmt"
import "git.tebibyte.media/sashakoshka/arf/file"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
// TokenKind is an enum representing what role a token has.
type TokenKind int
@@ -42,16 +43,28 @@ const (
TokenKindAt
TokenKindExclamation
TokenKindPercent
+TokenKindPercentAssignment
TokenKindTilde
+TokenKindTildeAssignment
+TokenKindEqualTo
+TokenKindNotEqualTo
+TokenKindLessThanEqualTo
TokenKindLessThan
TokenKindLShift
+TokenKindLShiftAssignment
TokenKindGreaterThan
+TokenKindGreaterThanEqualTo
TokenKindRShift
+TokenKindRShiftAssignment
TokenKindBinaryOr
+TokenKindBinaryOrAssignment
TokenKindLogicalOr
TokenKindBinaryAnd
+TokenKindBinaryAndAssignment
TokenKindLogicalAnd
+TokenKindBinaryXor
+TokenKindBinaryXorAssignment
)
// Token represents a single token. It holds its location in the file, as well
@@ -89,8 +102,13 @@ func (token Token) Location () (location file.Location) {
}
// NewError creates a new error at this token's location.
-func (token Token) NewError (message string, kind file.ErrorKind) (err file.Error) {
-return token.location.NewError(message, kind)
+func (token Token) NewError (
+message string,
+kind infoerr.ErrorKind,
+) (
+err infoerr.Error,
+) {
+return infoerr.NewError(token.location, message, kind)
}
// Describe generates a textual description of the token to be used in debug
@@ -165,24 +183,48 @@ func (tokenKind TokenKind) Describe () (description string) {
description = "Exclamation"
case TokenKindPercent:
description = "Percent"
+case TokenKindPercentAssignment:
+description = "PercentAssignment"
case TokenKindTilde:
description = "Tilde"
+case TokenKindTildeAssignment:
+description = "TildeAssignment"
+case TokenKindEqualTo:
+description = "EqualTo"
+case TokenKindNotEqualTo:
+description = "NotEqualTo"
case TokenKindLessThan:
description = "LessThan"
+case TokenKindLessThanEqualTo:
+description = "LessThanEqualTo"
case TokenKindLShift:
description = "LShift"
+case TokenKindLShiftAssignment:
+description = "LShiftAssignment"
case TokenKindGreaterThan:
description = "GreaterThan"
+case TokenKindGreaterThanEqualTo:
+description = "GreaterThanEqualTo"
case TokenKindRShift:
description = "RShift"
+case TokenKindRShiftAssignment:
+description = "RShiftAssignment"
case TokenKindBinaryOr:
description = "BinaryOr"
+case TokenKindBinaryOrAssignment:
+description = "BinaryOrAssignment"
case TokenKindLogicalOr:
description = "LogicalOr"
case TokenKindBinaryAnd:
description = "BinaryAnd"
+case TokenKindBinaryAndAssignment:
+description = "BinaryAndAssignment"
case TokenKindLogicalAnd:
description = "LogicalAnd"
+case TokenKindBinaryXor:
+description = "BinaryXor"
+case TokenKindBinaryXorAssignment:
+description = "BinaryXorAssignment"
}
return


@@ -1,7 +1,7 @@
package parser
-import "git.tebibyte.media/sashakoshka/arf/file"
import "git.tebibyte.media/sashakoshka/arf/lexer"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
var validArgumentStartTokens = []lexer.TokenKind {
lexer.TokenKindName,
@@ -37,7 +37,7 @@ func (parser *ParsingOperation) parseArgument () (argument Argument, err error)
err = parser.token.NewError (
"cannot use member selection in " +
"a variable definition",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}


@@ -1,7 +1,7 @@
package parser
-import "git.tebibyte.media/sashakoshka/arf/file"
import "git.tebibyte.media/sashakoshka/arf/lexer"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseBody parses the body of an arf file, after the metadata header.
func (parser *ParsingOperation) parseBody () (err error) {
@@ -21,13 +21,54 @@ func (parser *ParsingOperation) parseBody () (err error) {
parser.tree.dataSections[section.name] = section
if err != nil { return }
case "type":
+var section *TypeSection
+section, err = parser.parseTypeSection()
+if parser.tree.typeSections == nil {
+parser.tree.typeSections =
+make(map[string] *TypeSection)
+}
+parser.tree.typeSections[section.name] = section
+if err != nil { return }
+case "objt":
+var section *ObjtSection
+section, err = parser.parseObjtSection()
+if parser.tree.objtSections == nil {
+parser.tree.objtSections =
+make(map[string] *ObjtSection)
+}
+parser.tree.objtSections[section.name] = section
+if err != nil { return }
case "face":
+var section *FaceSection
+section, err = parser.parseFaceSection()
+if parser.tree.faceSections == nil {
+parser.tree.faceSections =
+make(map[string] *FaceSection)
+}
+parser.tree.faceSections[section.name] = section
+if err != nil { return }
case "enum":
+var section *EnumSection
+section, err = parser.parseEnumSection()
+if parser.tree.enumSections == nil {
+parser.tree.enumSections =
+make(map[string] *EnumSection)
+}
+parser.tree.enumSections[section.name] = section
+if err != nil { return }
case "func":
+var section *FuncSection
+section, err = parser.parseFuncSection()
+if parser.tree.funcSections == nil {
+parser.tree.funcSections =
+make(map[string] *FuncSection)
+}
+parser.tree.funcSections[section.name] = section
+if err != nil { return }
default:
err = parser.token.NewError (
"unknown section type \"" + sectionType + "\"",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
}


@@ -1,8 +1,8 @@
package parser
-import "git.tebibyte.media/sashakoshka/arf/file"
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseDataSection parses a data section.
func (parser *ParsingOperation) parseDataSection () (
@@ -94,9 +94,6 @@ func (parser *ParsingOperation) parseObjectInitializationValues () (
initializationValues ObjectInitializationValues,
err error,
) {
-println("BEGIN")
-defer println("END")
initializationValues.attributes = make(map[string] Argument)
baseIndent := 0
@@ -116,8 +113,6 @@ func (parser *ParsingOperation) parseObjectInitializationValues () (
// do not parse any further if the indent has changed
if indent != baseIndent { break }
-println("HIT")
// move on to the beginning of the line, which must contain
// a member initialization value
err = parser.nextToken(lexer.TokenKindDot)
@@ -132,7 +127,7 @@ func (parser *ParsingOperation) parseObjectInitializationValues () (
err = parser.token.NewError (
"duplicate member \"" + name + "\" in object " +
"member initialization",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
@@ -273,7 +268,7 @@ func (parser *ParsingOperation) parseType () (what Type, err error) {
default:
err = parser.token.NewError (
"unknown type qualifier \"" + qualifier + "\"",
-file.ErrorKindError)
+infoerr.ErrorKindError)
return
}
@@ -294,8 +289,6 @@ func (parser *ParsingOperation) parseIdentifier () (
identifier.location = parser.token.Location()
for {
-// TODO: eat up newlines and tabs after the dot, but not before
-// it.
if !parser.token.Is(lexer.TokenKindName) { break }
identifier.trail = append (
@@ -306,6 +299,18 @@ func (parser *ParsingOperation) parseIdentifier () (
if err != nil { return }
if !parser.token.Is(lexer.TokenKindDot) { break }
+err = parser.nextToken()
+if err != nil { return }
+
+// allow the identifier to continue on to the next line if there
+// is a line break right after the dot
+for parser.token.Is(lexer.TokenKindNewline) ||
+parser.token.Is(lexer.TokenKindIndent) {
+err = parser.nextToken()
+if err != nil { return }
+}
}
return
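The `parseIdentifier` change above lets a dotted identifier continue on the next line when the break comes directly after the dot, as in the `thing.thing.thing.thing` data section of the new test file. A toy sketch of that rule over a plain string, ignoring tokenization entirely (the helper name is illustrative, and unlike the real parser it does not reject whitespace before a dot):

```go
package main

import (
	"fmt"
	"strings"
)

// parseIdentifierTrail splits a dotted identifier into its trail of names,
// skipping any newline and indentation that directly follow a dot, the way
// the parser change eats Newline and Indent tokens after a dot.
func parseIdentifierTrail(input string) (trail []string) {
	for _, part := range strings.Split(input, ".") {
		// trim only leading whitespace, so only breaks that come
		// after a dot are skipped
		trail = append(trail, strings.TrimLeft(part, " \t\n"))
	}
	return
}

func main() {
	fmt.Println(parseIdentifierTrail("thing.\n\tthing.thing"))
}
```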

parser/data_test.go Normal file

@@ -0,0 +1,38 @@
package parser
import "testing"
func TestData (test *testing.T) {
checkTree ("../tests/parser/data",
`:arf
---
data ro integer:Int 3202
data ro integerArray16:{Int 16}
data ro integerArrayInitialized:{Int 16}
3948
293
293049
948
912
340
0
2304
0
4785
92
data ro integerArrayVariable:{Int ..}
data ro integerPointer:{Int}
data ro mutInteger:Int:mut 3202
data ro mutIntegerPointer:{Int}:mut
data ro nestedObject:Obj
.that
.bird2 123.8439
.bird3 9328.21348239
.this
.bird0 324
.bird1 "hello world"
data ro object:thing.thing.thing.thing
.that 2139
.this 324
`, test)
}

parser/enum.go Normal file

@@ -0,0 +1,93 @@
package parser
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
import "git.tebibyte.media/sashakoshka/arf/infoerr"
func (parser *ParsingOperation) parseEnumSection () (
section *EnumSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section = &EnumSection { location: parser.token.Location() }
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
section.permission = parser.token.Value().(types.Permission)
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.name = parser.token.Value().(string)
// parse inherited type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
section.what, err = parser.parseType()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
// parse members
err = parser.parseEnumMembers(section)
if err != nil { return }
if len(section.members) == 0 {
infoerr.NewError (
section.location,
"defining an enum with no members",
infoerr.ErrorKindWarn).Print()
}
return
}
// parseEnumMembers parses a list of members for an enum section. Indentation
// level is assumed.
func (parser *ParsingOperation) parseEnumMembers (
into *EnumSection,
) (
err error,
) {
for {
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
member := EnumMember { }
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
member.location = parser.token.Location()
member.name = parser.token.Value().(string)
err = parser.nextToken()
if err != nil { return }
// parse default value
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
member.value, err = parser.parseInitializationValues(1)
into.members = append(into.members, member)
if err != nil { return }
} else {
member.value, err = parser.parseArgument()
into.members = append(into.members, member)
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
}
}
}

parser/enum_test.go Normal file

@@ -0,0 +1,38 @@
package parser
import "testing"
func TestEnum (test *testing.T) {
checkTree ("../tests/parser/enum",
`:arf
---
enum ro AffrontToGod:{Int 4}
bird0
28394
9328
398
9
bird1
23
932832
398
2349
bird2
1
2
3
4
enum ro NamedColor:U32
red 16711680
green 65280
blue 255
enum ro Weekday:Int
sunday
monday
tuesday
wednesday
thursday
friday
saturday
`, test)
}

parser/face.go Normal file

@@ -0,0 +1,132 @@
package parser
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseFaceSection parses an interface section.
func (parser *ParsingOperation) parseFaceSection () (
section *FaceSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section = &FaceSection {
location: parser.token.Location(),
behaviors: make(map[string] FaceBehavior),
}
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
section.permission = parser.token.Value().(types.Permission)
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.name = parser.token.Value().(string)
// parse inherited interface
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.inherits, err = parser.parseIdentifier()
if err != nil { return }
err = parser.nextToken(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
// parse members
for {
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
// parse behavior
behaviorBeginning := parser.token.Location()
var behavior FaceBehavior
behavior, err = parser.parseFaceBehavior()
// add to section
_, exists := section.behaviors[behavior.name]
if exists {
err = infoerr.NewError (
behaviorBeginning,
"multiple behaviors named " + behavior.name +
" in this interface",
infoerr.ErrorKindError)
return
}
section.behaviors[behavior.name] = behavior
if err != nil { return }
}
return
}
// parseFaceBehavior parses a single interface behavior. Indentation level is
// assumed.
func (parser *ParsingOperation) parseFaceBehavior () (
behavior FaceBehavior,
err error,
) {
err = parser.expect(lexer.TokenKindIndent)
if err != nil { return }
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
behavior.name = parser.token.Value().(string)
err = parser.nextToken(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
for {
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 2 { return }
// get preceding symbol
err = parser.nextToken (
lexer.TokenKindGreaterThan,
lexer.TokenKindLessThan)
if err != nil { return }
kind := parser.token.Kind()
var declaration Declaration
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
declaration.name = parser.token.Value().(string)
// parse inherited type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
declaration.what, err = parser.parseType()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
if kind == lexer.TokenKindGreaterThan {
behavior.inputs = append (
behavior.inputs,
declaration)
} else {
behavior.outputs = append (
behavior.outputs,
declaration)
}
}
return
}
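Aside: the `>`/`<` dispatch at the end of parseFaceBehavior can be sketched standalone. `Declaration` and `FaceBehavior` are re-declared here as minimal stand-ins (the real parser types also carry locations and full type information), so this is an illustration of the routing logic, not the package's API:

```go
package main

import "fmt"

// Minimal stand-ins for the parser's types.
type Declaration struct{ name string }

type FaceBehavior struct {
	name    string
	inputs  []Declaration
	outputs []Declaration
}

// addDeclaration mirrors the branch at the end of parseFaceBehavior:
// a '>' token sends the declaration to the behavior's inputs, and a
// '<' token sends it to the outputs.
func (behavior *FaceBehavior) addDeclaration(symbol rune, declaration Declaration) {
	if symbol == '>' {
		behavior.inputs = append(behavior.inputs, declaration)
	} else {
		behavior.outputs = append(behavior.outputs, declaration)
	}
}

func main() {
	// Mirrors the ReadWriter example from the face test case: the
	// write behavior has one input and two outputs.
	behavior := FaceBehavior{name: "write"}
	behavior.addDeclaration('>', Declaration{name: "data"})
	behavior.addDeclaration('<', Declaration{name: "wrote"})
	behavior.addDeclaration('<', Declaration{name: "err"})
	fmt.Println(len(behavior.inputs), len(behavior.outputs))
}
```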

parser/func.go Normal file

@@ -0,0 +1,30 @@
package parser
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
// import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseFuncSection parses a function section.
func (parser *ParsingOperation) parseFuncSection () (
section *FuncSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section = &FuncSection { location: parser.token.Location() }
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
section.permission = parser.token.Value().(types.Permission)
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.name = parser.token.Value().(string)
err = parser.nextToken(lexer.TokenKindNewline)
if err != nil { return }
return
}

parser/func_test.go Normal file

@@ -0,0 +1,111 @@
package parser
import "testing"
func TestFunc (test *testing.T) {
checkTree ("../tests/parser/func",
`:arf
---
func ro aBasicExternal
> someInput:Int:mut
< someOutput:Int 4
---
external
func ro bMethod
@ bird:{Bird}
> someInput:Int:mut
< someOutput:Int 4
---
external
func ro cBasicPhrases
---
[fn 329 983 09]
[fn 329 983 09]
[fn 329 983 091]
[fn [gn 329 983 091] 123]
func ro dArgumentTypes
---
[bird tree butterfly.wing "hello world" grass:{Int:mut 8}]
func ro eMath
[> x:Int]
[> y:Int]
[< z:Int]
[---]
[++ x]
[-- y]
[set z [+ [* 0392 00] 98 x [/ 9832 y] 930]]
[! true]
[~ 0b01]
[% 873 32]
[= 5 5]
[!= 4 4]
[<= 4 98]
[< 4 98]
[<< 0x0F 4]
[>= 98 4]
[> 98 4]
[>> 0xF0 4]
[| 0b01 0b10]
[& 0b110 0b011]
[&& true true]
[|| true false]
func ro fReturnDirection
< err:Error
---
[someFunc 498 2980 90] -> thing:Int err
[otherFunc] -> thing err:Error
[fn 329 983 091] -> thing:Int err
func ro gControlFlow
---
[if condition]
[something]
[if condition]
[something]
[elseif]
[otherThing]
[else]
[finalThing]
[while [< x 432]]
[something]
[switch value]
[: 324]
[something]
[: 93284]
otherThing
[: 9128 34738 7328]
multipleCases
[:]
[defaultThing]
[for index:Size element:Int someArray]
[something]
[someNextThing]
[justMakingSureBlockParsingWorks]
[if condition]
[if condition]
[nestedThing]
[else]
[otherThing]
[else]
[if condition]
[nestedThing]
[else]
[otherThing]
func hSetPhrase
---
[set x:Int 3]
[set y:{Int} [. x]]
[set z:{Int 8}]
398
9
2309
983
-2387
478
555
123
[set bird:Bird]
.that
.whenYou 99999
.this 324
`, test)
}


@@ -1,7 +1,7 @@
 package parser

-import "git.tebibyte.media/sashakoshka/arf/file"
 import "git.tebibyte.media/sashakoshka/arf/lexer"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"

 // parseMeta parses the metadata header at the top of an arf file.
 func (parser *ParsingOperation) parseMeta () (err error) {
@@ -35,7 +35,7 @@ func (parser *ParsingOperation) parseMeta () (err error) {
 		default:
 			parser.token.NewError (
 				"unrecognized metadata field: " + field,
-				file.ErrorKindError)
+				infoerr.ErrorKindError)
 		}

 		err = parser.nextToken(lexer.TokenKindNewline)

parser/meta_test.go Normal file

@@ -0,0 +1,14 @@
package parser
import "testing"
func TestMeta (test *testing.T) {
checkTree ("../tests/parser/meta",
`:arf
author "Sasha Koshka"
license "GPLv3"
require "someModule"
require "otherModule"
---
`, test)
}

parser/objt.go Normal file

@@ -0,0 +1,127 @@
package parser
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseObjtSection parses an object type definition. This allows for structured
// types to be defined, and for member variables to be added and overridden.
func (parser *ParsingOperation) parseObjtSection () (
section *ObjtSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section = &ObjtSection { location: parser.token.Location() }
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
section.permission = parser.token.Value().(types.Permission)
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.name = parser.token.Value().(string)
// parse inherited type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
section.inherits, err = parser.parseIdentifier()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
// parse members
err = parser.parseObjtMembers(section)
if err != nil { return }
if len(section.members) == 0 {
infoerr.NewError (
section.location,
"defining an object with no members",
infoerr.ErrorKindWarn).Print()
}
return
}
// parseObjtMembers parses a list of members for an object section. Indentation
// level is assumed.
func (parser *ParsingOperation) parseObjtMembers (
into *ObjtSection,
) (
err error,
) {
for {
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
// add member to object section
var member ObjtMember
member, err = parser.parseObjtMember()
into.members = append(into.members, member)
if err != nil { return }
}
}
// parseObjtMember parses a single member of an object section. Indentation
// level is assumed.
func (parser *ParsingOperation) parseObjtMember () (
member ObjtMember,
err error,
) {
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
member.permission = parser.token.Value().(types.Permission)
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
member.name = parser.token.Value().(string)
// get type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
member.what, err = parser.parseType()
if err != nil { return }
println(parser.token.Describe())
// if there is a bit width, get it
if parser.token.Is(lexer.TokenKindBinaryAnd) {
err = parser.nextToken(lexer.TokenKindUInt)
if err != nil { return }
member.bitWidth = parser.token.Value().(uint64)
err = parser.nextToken()
if err != nil { return }
}
// parse default value
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
member.defaultValue,
err = parser.parseInitializationValues(1)
if err != nil { return }
} else {
member.defaultValue, err = parser.parseArgument()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
}
return
}
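Aside: the optional bit-width branch in parseObjtMember ('&' followed by an unsigned integer, as in `ro that:Int & 1` from the objt test case) can be sketched with a toy token stream. The token model below is an assumption for illustration only; the real lexer's tokens carry kinds, values, and locations:

```go
package main

import "fmt"

// Toy token model standing in for the arf lexer.
type tokenKind int

const (
	kindBinaryAnd tokenKind = iota
	kindUInt
	kindNewline
)

type token struct {
	kind  tokenKind
	value uint64
}

// parseBitWidth mirrors the branch in parseObjtMember: if the current
// token is '&', the next token must be an unsigned integer giving the
// member's width in bits; otherwise no bit width is present.
func parseBitWidth(tokens []token) (width uint64, rest []token, err error) {
	if len(tokens) == 0 || tokens[0].kind != kindBinaryAnd {
		return 0, tokens, nil // no bit width present
	}
	if len(tokens) < 2 || tokens[1].kind != kindUInt {
		return 0, tokens, fmt.Errorf("expected uint after '&'")
	}
	return tokens[1].value, tokens[2:], nil
}

func main() {
	width, _, err := parseBitWidth([]token{
		{kind: kindBinaryAnd},
		{kind: kindUInt, value: 24},
	})
	fmt.Println(width, err)
}
```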

parser/objt_test.go Normal file

@@ -0,0 +1,31 @@
package parser
import "testing"
func TestObjt (test *testing.T) {
checkTree ("../tests/parser/objt",
`:arf
---
objt ro Basic:Obj
ro that:Basic
ro this:Basic
objt ro BitFields:Obj
ro that:Int & 1
ro this:Int & 24 298
objt ro ComplexInit:Obj
ro whatever:{Int 3}
230984
849
394580
ro complex0:Bird
.that 98
.this 2
ro complex1:Bird
.that 98902
.this 235
ro basic:Int 87
objt ro Init:Obj
ro that:String "hello world"
ro this:Int 23
`, test)
}


@@ -5,6 +5,7 @@ import "os"
 import "path/filepath"
 import "git.tebibyte.media/sashakoshka/arf/file"
 import "git.tebibyte.media/sashakoshka/arf/lexer"
+import "git.tebibyte.media/sashakoshka/arf/infoerr"

 // ParsingOperation holds information about an ongoing parsing operation.
 type ParsingOperation struct {
@@ -96,9 +97,9 @@ func (parser *ParsingOperation) expect (allowed ...lexer.TokenKind) (err error)
 		message += allowedItem.Describe()
 	}

-	err = file.NewError (
+	err = infoerr.NewError (
 		parser.token.Location(),
-		message, file.ErrorKindError)
+		message, infoerr.ErrorKindError)
 	return
 }


@@ -1,6 +1,7 @@
 package parser

 import "io"
+import "strings"
 import "testing"

 // import "git.tebibyte.media/sashakoshka/arf/types"
@@ -10,9 +11,9 @@ func checkTree (modulePath string, correct string, test *testing.T) {
 	treeRunes := []rune(treeString)

 	test.Log("CORRECT TREE:")
-	test.Log(correct)
+	logWithLineNumbers(correct, test)
 	test.Log("WHAT WAS PARSED:")
-	test.Log(treeString)
+	logWithLineNumbers(treeString, test)

 	if err != io.EOF && err != nil {
 		test.Log("returned error:")
@@ -63,49 +64,11 @@ func checkTree (modulePath string, correct string, test *testing.T) {
 	}
 }

-func TestMeta (test *testing.T) {
-	checkTree ("../tests/parser/meta",
-`:arf
-author "Sasha Koshka"
-license "GPLv3"
-require "someModule"
-require "otherModule"
----
-`, test)
-}
-
-func TestData (test *testing.T) {
-	checkTree ("../tests/parser/data",
-`:arf
----
-data wr integer:Int 3202
-data wr integerArray16:{Int 16}
-data wr integerArrayInitialized:{Int 16}
-3948
-293
-293049
-948
-912
-340
-0
-2304
-0
-4785
-92
-data wr integerArrayVariable:{Int ..}
-data wr integerPointer:{Int}
-data wr mutInteger:Int:mut 3202
-data wr mutIntegerPointer:{Int}:mut
-data wr nestedObject:Obj
-.that
-.bird2 123.8439
-.bird3 9328.21348239
-.this
-.bird0 324
-.bird1 "hello world"
-data wr object:Obj
-.that 2139
-.this 324
-`, test)
-}
+func logWithLineNumbers (bigString string, test *testing.T) {
+	lines := strings.Split (
+		strings.Replace(bigString, "\t", " ", -1), "\n")
+
+	for index, line := range lines {
+		test.Logf("%3d | %s", index + 1, line)
+	}
+}
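Aside: the new logWithLineNumbers helper boils down to a split-and-prefix loop. A standalone sketch of the same idea (the tab replacement width here is an assumption, and the helper logs through testing.T rather than returning a slice):

```go
package main

import (
	"fmt"
	"strings"
)

// numbered mirrors the logWithLineNumbers helper: tabs are widened to
// spaces so indentation survives in log output, then every line is
// prefixed with its 1-based line number.
func numbered(bigString string) []string {
	lines := strings.Split(
		strings.Replace(bigString, "\t", "    ", -1), "\n")
	out := make([]string, len(lines))
	for index, line := range lines {
		out[index] = fmt.Sprintf("%3d | %s", index+1, line)
	}
	return out
}

func main() {
	for _, line := range numbered(":arf\n---\n\tdata ro x:Int 3") {
		fmt.Println(line)
	}
}
```

This makes test failures much easier to read: the "correct" and "parsed" trees can be compared line by line against each other.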


@@ -46,6 +46,26 @@ func (tree *SyntaxTree) ToString (indent int) (output string) {
 	output += doIndent(indent, "---\n")

+	typeSectionKeys := sortMapKeysAlphabetically(tree.typeSections)
+	for _, name := range typeSectionKeys {
+		output += tree.typeSections[name].ToString(indent)
+	}
+
+	objtSectionKeys := sortMapKeysAlphabetically(tree.objtSections)
+	for _, name := range objtSectionKeys {
+		output += tree.objtSections[name].ToString(indent)
+	}
+
+	enumSectionKeys := sortMapKeysAlphabetically(tree.enumSections)
+	for _, name := range enumSectionKeys {
+		output += tree.enumSections[name].ToString(indent)
+	}
+
+	faceSectionKeys := sortMapKeysAlphabetically(tree.faceSections)
+	for _, name := range faceSectionKeys {
+		output += tree.faceSections[name].ToString(indent)
+	}
+
 	dataSectionKeys := sortMapKeysAlphabetically(tree.dataSections)
 	for _, name := range dataSectionKeys {
 		output += tree.dataSections[name].ToString(indent)
@@ -246,3 +266,126 @@ func (section *DataSection) ToString (indent int) (output string) {
 	}
 	return
 }
func (section *TypeSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"type ",
section.permission.ToString(), " ",
section.name, ":",
section.inherits.ToString())
isComplexInitialization :=
section.defaultValue.kind == ArgumentKindObjectInitializationValues ||
section.defaultValue.kind == ArgumentKindArrayInitializationValues
if section.defaultValue.value == nil {
output += "\n"
} else if isComplexInitialization {
output += "\n"
output += section.defaultValue.ToString(indent + 1, true)
} else {
output += " " + section.defaultValue.ToString(0, false)
output += "\n"
}
return
}
func (member ObjtMember) ToString (indent int) (output string) {
output += doIndent(indent)
output += member.permission.ToString() + " "
output += member.name + ":"
output += member.what.ToString()
if member.bitWidth > 0 {
output += fmt.Sprint(" & ", member.bitWidth)
}
isComplexInitialization :=
member.defaultValue.kind == ArgumentKindObjectInitializationValues ||
member.defaultValue.kind == ArgumentKindArrayInitializationValues
if member.defaultValue.value == nil {
output += "\n"
} else if isComplexInitialization {
output += "\n"
output += member.defaultValue.ToString(indent + 1, true)
} else {
output += " " + member.defaultValue.ToString(0, false)
output += "\n"
}
return
}
func (section *ObjtSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"objt ",
section.permission.ToString(), " ",
section.name, ":",
section.inherits.ToString(), "\n")
for _, member := range section.members {
output += member.ToString(indent + 1)
}
return
}
func (section *EnumSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"enum ",
section.permission.ToString(), " ",
section.name, ":",
section.what.ToString(), "\n")
for _, member := range section.members {
output += doIndent(indent + 1, member.name)
isComplexInitialization :=
member.value.kind == ArgumentKindObjectInitializationValues ||
member.value.kind == ArgumentKindArrayInitializationValues
if member.value.value == nil {
output += "\n"
} else if isComplexInitialization {
output += "\n"
output += member.value.ToString(indent + 2, true)
} else {
output += " " + member.value.ToString(0, false)
output += "\n"
}
}
return
}
func (section *FaceSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"face ",
section.permission.ToString(), " ",
section.name, ":",
section.inherits.ToString(), "\n")
for _, name := range sortMapKeysAlphabetically(section.behaviors) {
behavior := section.behaviors[name]
output += behavior.ToString(indent + 1)
}
return
}
func (behavior *FaceBehavior) ToString (indent int) (output string) {
output += doIndent(indent, behavior.name, "\n")
for _, inputItem := range behavior.inputs {
output += doIndent(indent + 1, "> ", inputItem.ToString(), "\n")
}
for _, outputItem := range behavior.outputs {
output += doIndent(indent + 1, "< ", outputItem.ToString(), "\n")
}
return
}
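Aside: every section map above is walked through sortMapKeysAlphabetically so that ToString output is deterministic, which is what checkTree compares against a literal string. A standalone stand-in for that helper (its real signature is not shown in this diff, so the generic version here is an assumption):

```go
package main

import (
	"fmt"
	"sort"
)

// sortKeys collects a map's keys and sorts them, giving iteration a
// stable, reproducible order. Without this, Go's randomized map
// iteration would make ToString output differ between runs and break
// string comparison in tests.
func sortKeys[V any](sections map[string]V) []string {
	keys := make([]string, 0, len(sections))
	for name := range sections {
		keys = append(keys, name)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	sections := map[string]string{"objt": "b", "enum": "c", "data": "a"}
	for _, name := range sortKeys(sections) {
		fmt.Println(name, sections[name])
	}
}
```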


@@ -11,7 +11,12 @@ type SyntaxTree struct {
 	author string
 	requires []string

+	typeSections map[string] *TypeSection
+	objtSections map[string] *ObjtSection
+	enumSections map[string] *EnumSection
+	faceSections map[string] *FaceSection
 	dataSections map[string] *DataSection
+	funcSections map[string] *FuncSection
 }
// Identifier represents a chain of arguments separated by a dot. // Identifier represents a chain of arguments separated by a dot.
@@ -156,6 +161,89 @@ type DataSection struct {
 	name string
 	what Type
-	value Argument
 	permission types.Permission
+	value Argument
 }
// TypeSection represents a blind type definition.
type TypeSection struct {
location file.Location
name string
inherits Type
permission types.Permission
defaultValue Argument
}
// ObjtMember represents a part of an object type definition.
type ObjtMember struct {
location file.Location
name string
what Type
bitWidth uint64
permission types.Permission
defaultValue Argument
}
// ObjtSection represents an object type definition
type ObjtSection struct {
location file.Location
name string
inherits Identifier
permission types.Permission
members []ObjtMember
}
type EnumMember struct {
location file.Location
name string
value Argument
}
// EnumSection represents an enumerated type section.
type EnumSection struct {
location file.Location
name string
what Type
permission types.Permission
members []EnumMember
}
// FaceBehavior represents a behavior of an interface section.
type FaceBehavior struct {
location file.Location
name string
inputs []Declaration
outputs []Declaration
}
// FaceSection represents an interface type section.
type FaceSection struct {
location file.Location
name string
inherits Identifier
permission types.Permission
behaviors map[string] FaceBehavior
}
// Block represents a scoped/indented block of code.
// TODO: blocks will not directly nest. nested blocks will be stored as a part
// of certain control flow statements.
type Block []Phrase
// FuncSection represents a function section.
type FuncSection struct {
location file.Location
name string
permission types.Permission
receiver *Declaration
inputs []Declaration
outputs []Declaration
root *Block
}

parser/type.go Normal file

@@ -0,0 +1,53 @@
package parser
import "git.tebibyte.media/sashakoshka/arf/types"
import "git.tebibyte.media/sashakoshka/arf/lexer"
// import "git.tebibyte.media/sashakoshka/arf/infoerr"
// parseTypeSection parses a blind type definition, meaning it can inherit from
// anything including primitives, but cannot define structure.
func (parser *ParsingOperation) parseTypeSection () (
section *TypeSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section = &TypeSection { location: parser.token.Location() }
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
section.permission = parser.token.Value().(types.Permission)
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
section.name = parser.token.Value().(string)
// parse inherited type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
section.inherits, err = parser.parseType()
if err != nil { return }
// parse default values
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
section.defaultValue, err = parser.parseInitializationValues(0)
if err != nil { return }
} else {
section.defaultValue, err = parser.parseArgument()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
}
return
}

parser/type_test.go Normal file

@@ -0,0 +1,17 @@
package parser
import "testing"
func TestType (test *testing.T) {
checkTree ("../tests/parser/type",
`:arf
---
type ro Basic:Int
type ro BasicInit:Int 6
type ro IntArray:{Int ..}
type ro IntArrayInit:{Int 3}
3298
923
92
`, test)
}


@@ -1,3 +1,3 @@
 :arf
 --- rw -> -349820394 932748397 239485.37520 "hello world!\n" 'E' helloWorld:.,..[]{}
-+ - ++ -- * / @ ! % ~ < << > >> | || & &&
++ - ++ -- * / @ ! % %= ~ ~= = != < <= << <<= > >= >> >>= | |= || & &= && ^ ^=


@@ -0,0 +1,2 @@
:arf
'aaaaaaa'


@@ -0,0 +1,2 @@
:arf
hello;


@@ -0,0 +1,2 @@
:arf
"\g"


@@ -1,19 +1,19 @@
 :arf
 ---
-data wr integer:Int 3202
-data wr mutInteger:Int:mut 3202
-data wr integerPointer:{Int}
-data wr mutIntegerPointer:{Int}:mut
-data wr integerArray16:{Int 16}
-data wr integerArrayVariable:{Int ..}
-data wr integerArrayInitialized:{Int 16}
+data ro integer:Int 3202
+data ro mutInteger:Int:mut 3202
+data ro integerPointer:{Int}
+data ro mutIntegerPointer:{Int}:mut
+data ro integerArray16:{Int 16}
+data ro integerArrayVariable:{Int ..}
+data ro integerArrayInitialized:{Int 16}
 	3948 293 293049 948 912
 	340 0 2304 0 4785 92
@@ -22,11 +22,13 @@ data wr integerArrayInitialized:{Int 16}
 # data wr mutIntegerPointerInit:{Int}:mut [& integer]

-data wr object:Obj
+# TODO: maybe test identifiers somewhere else?
+data ro object:thing.thing.
+	thing.thing
 	.this 324
 	.that 2139

-data wr nestedObject:Obj
+data ro nestedObject:Obj
 	.this
 		.bird0 324
 		.bird1 "hello world"
@@ -35,7 +37,7 @@ data wr nestedObject:Obj
 	.bird3 9328.21348239

-# func rr main
+# func ro main
 # ---
 # # TODO: set should be a special case, checking under itself for object
 # member initialization args. it should also check for args in general


@@ -0,0 +1,30 @@
:arf
---
enum ro Weekday:Int
sunday
monday
tuesday
wednesday
thursday
friday
saturday
enum ro NamedColor:U32
red 0xFF0000
green 0x00FF00
blue 0x0000FF
enum ro AffrontToGod:{Int 4}
bird0
28394 9328
398 9
bird1
23 932832
398
2349
bird2
1
2
3
4


@@ -0,0 +1,15 @@
:arf
---
face ro ReadWriter:Face
write
> data:{Byte ..}
< wrote:Int
< err:Error
read
> into:{Byte ..}
< read:Int
< err:Error
face ro Destroyer:Face
destroy

tests/parser/func/main.arf Normal file

@@ -0,0 +1,134 @@
:arf
---
func ro aBasicExternal
> someInput:Int:mut
< someOutput:Int 4
---
external
func ro bMethod
@ bird:{Bird}
> someInput:Int:mut
< someOutput:Int 4
---
external
func ro cBasicPhrases
---
fn 329 983 09
[fn 329 983 09]
[fn
329
983
091]
fn [gn
329 983
091] 123
func ro dArgumentTypes
---
[bird tree butterfly.wing "hello world"
grass:{Int:mut 8}]
func ro eMath
> x:Int
> y:Int
< z:Int
---
++ x
-- y
set z [+ [* 0392 00] 98 x [/ 9832 y] 930]
# TODO: need tokens ~=
! true
~ 0b01
# ~= x
% 873 32
= 5 5
!= 4 4
<= 4 98
< 4 98
<< 0x0F 4
# <<= x 4
>= 98 4
> 98 4
>> 0xF0 4
# >>= x 4
| 0b01 0b10
# |= x 0b10
& 0b110 0b011
# &= x 0b011
&& true true
|| true false
func ro fReturnDirection
< err:Error
---
someFunc 498 2980 90 -> thing:Int err
otherFunc -> thing err:Error
[fn
329
983
091] -> thing:Int err
func ro gControlFlow
---
if condition
something
if condition
something
elseif
[otherThing]
else
finalThing
while [< x 432]
something
switch value
: 324
something
[: 93284]
otherThing
: 9128 34738 7328
multipleCases
:
[defaultThing]
for index:Size element:Int someArray
something
someNextThing
justMakingSureBlockParsingWorks
[if condition]
if condition
nestedThing
else
otherThing
else
if condition
nestedThing
else
otherThing
func hSetPhrase
---
set x:Int 3
# TODO: this should be the "location of" phrase. update other things to
# match.
set y:{Int} [. x]
set z:{Int 8}
398 9 2309 983 -2387
478 555 123
set bird:Bird
.that
.whenYou 99999
.this 324


@@ -0,0 +1,25 @@
:arf
---
objt ro Basic:Obj
ro that:Basic
ro this:Basic
objt ro BitFields:Obj
ro that:Int & 1
ro this:Int & 24 298
objt ro Init:Obj
ro that:String "hello world"
ro this:Int 23
objt ro ComplexInit:Obj
ro whatever:{Int 3}
230984
849 394580
ro complex0:Bird
.that 98
.this 2
ro complex1:Bird
.that 98902
.this 235
ro basic:Int 87


@@ -0,0 +1,10 @@
:arf
---
type ro Basic:Int
type ro BasicInit:Int 6
type ro IntArray:{Int ..}
type ro IntArrayInit:{Int 3}
3298 923 92


@@ -1,48 +1,63 @@
 package types

-type Mode int
+type Permission int

 const (
-	ModeNone = iota
-	ModeRead
-	ModeWrite
+	// Displays as: pv
+	//
+	// Other modules cannot access the section or member.
+	PermissionPrivate Permission = iota
+
+	// Displays as: ro
+	//
+	// Other modules can access the section or member, but can only read its
+	// value. It is effectively immutable.
+	//
+	// Data sections, member variables, etc.: The value can be read by other
+	// modules but not altered by them.
+	//
+	// Functions: The function can be called by other modules.
+	//
+	// Methods: The method can be called by other modules, but cannot be
+	// overridden by a type defined in another module inheriting from this
+	// method's receiver.
+	PermissionReadOnly
+
+	// Displays as: rw
+	//
+	// Other modules can not only access the section or member's value, but
+	// can also alter it. It is effectively mutable.
+	//
+	// Data sections, member variables, etc.: The value can be read and
+	// altered at will by other modules.
+	//
+	// Functions: This permission cannot be applied to non-method functions.
+	//
+	// Methods: The method can be called by other modules, and overridden by
+	// types defined in other modules inheriting from the method's receiver.
+	PermissionReadWrite
 )

-type Permission struct {
-	Internal Mode
-	External Mode
-}
-
-func ModeFrom (char rune) (mode Mode) {
-	switch (char) {
-	case 'n': mode = ModeNone
-	case 'r': mode = ModeRead
-	case 'w': mode = ModeWrite
-	}
-	return
-}
-
-func PermissionFrom (data string) (permission Permission) {
-	if len(data) != 2 { return }
-	permission.Internal = ModeFrom(rune(data[0]))
-	permission.External = ModeFrom(rune(data[1]))
-	return
-}
-
-func (mode Mode) ToString () (output string) {
-	switch mode {
-	case ModeNone:  output = "n"
-	case ModeRead:  output = "r"
-	case ModeWrite: output = "w"
-	}
-	return
-}
+// PermissionFrom creates a new permission value from the specified text. If the
+// input text was not valid, the function returns false for worked. Otherwise,
+// it returns true.
+func PermissionFrom (data string) (permission Permission, worked bool) {
+	worked = true
+	switch data {
+	case "pv": permission = PermissionPrivate
+	case "ro": permission = PermissionReadOnly
+	case "rw": permission = PermissionReadWrite
+	default:   worked = false
+	}
+	return
+}

+// ToString converts the permission value into a string.
 func (permission Permission) ToString () (output string) {
-	output += permission.Internal.ToString()
-	output += permission.External.ToString()
+	switch permission {
+	case PermissionPrivate:   output = "pv"
+	case PermissionReadOnly:  output = "ro"
+	case PermissionReadWrite: output = "rw"
+	}
 	return
 }
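Aside: a standalone sketch of the new Permission API's round-trip behavior, with the type and both functions re-declared from the diff so the snippet compiles without the arf module:

```go
package main

import "fmt"

type Permission int

const (
	PermissionPrivate Permission = iota
	PermissionReadOnly
	PermissionReadWrite
)

// PermissionFrom mirrors the new two-letter parsing: an unrecognized
// string reports worked == false instead of silently defaulting.
func PermissionFrom(data string) (permission Permission, worked bool) {
	worked = true
	switch data {
	case "pv":
		permission = PermissionPrivate
	case "ro":
		permission = PermissionReadOnly
	case "rw":
		permission = PermissionReadWrite
	default:
		worked = false
	}
	return
}

// ToString converts the permission value back into its display text.
func (permission Permission) ToString() (output string) {
	switch permission {
	case PermissionPrivate:
		output = "pv"
	case PermissionReadOnly:
		output = "ro"
	case PermissionReadWrite:
		output = "rw"
	}
	return
}

func main() {
	// Valid inputs round-trip; the old "rr"/"wr" pairs no longer parse.
	for _, text := range []string{"pv", "ro", "rw", "rr"} {
		permission, worked := PermissionFrom(text)
		fmt.Println(text, worked, permission.ToString())
	}
}
```

This explains the test-case churn elsewhere in the compare: `data wr …` and `func rr main` become `data ro …` and `func ro main`, since only pv/ro/rw are valid under the new model.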