Compare commits


121 Commits

Author SHA1 Message Date
befe371e4f What is anything anymore 2022-10-24 01:16:07 -04:00
dfa7d31163 Methods do not collide and are properly retrievable 2022-10-23 02:24:34 -04:00
5641220986 Some stuff on the func sections babey 2022-10-23 01:16:40 -04:00
a9a1c6ae9b Data sections can now have rw perms 2022-10-23 00:07:14 -04:00
f5ad652d68 Add external value to func and data sections 2022-10-19 22:54:24 -04:00
5520994072 Added some basic structs and interfaces for phrase parsing 2022-10-19 22:46:20 -04:00
82093865b0 wip 2022-10-19 13:27:48 -04:00
8b4fee50ab Literals call themselves literals instead of arg 2022-10-18 23:39:07 -04:00
6dfde851e5 Add cast phrase kind to parser 2022-10-18 23:32:15 -04:00
308996f059 Pointers and dynamic arrays are accounted for 2022-10-18 17:34:37 -04:00
ae0765b8f4 Add non-basic types to type section test case 2022-10-18 15:00:42 -04:00
2a1a48d9c5 Added basic test case for functions 2022-10-18 15:00:17 -04:00
3067b64f47 Altered semantics of fetchNodeFromIdentifier 2022-10-17 17:24:25 -04:00
fd9b1b3d11 wip 2022-10-17 15:41:26 -04:00
82c868f0c1 Test case for data sections 2022-10-17 01:48:37 -04:00
d6c8f57a81 Fixed type checking for string literals
The string builtin was incorrectly described, and
StringLiteral.canBePassedAs was checking the type actual instead of
the type points in the case of a reducible type.
2022-10-17 01:40:00 -04:00
a5b1385012 Some error message tweaks 2022-10-16 02:58:31 -04:00
f57637d7fc Untested data section analysis 2022-10-16 02:53:37 -04:00
85bc6064f2 Object member values must be unique 2022-10-16 02:41:41 -04:00
098acafab1 No segfaulty 2022-10-16 02:07:25 -04:00
e885af997d Store enum default value as argument, error on empty enum 2022-10-16 02:04:38 -04:00
500184c4ab Enum values are auto-filled 2022-10-14 20:06:11 -04:00
2669a04857 Enum member names and values must be unique 2022-10-14 04:00:05 -04:00
dd29f69213 Added methods to check if a type is a number 2022-10-13 20:52:49 -04:00
12755d3f85 Enum analysis works 2022-10-13 18:48:38 -04:00
a1faf68cce Untested enum analysis 2022-10-13 18:02:35 -04:00
c047a1438a Restrict type sections to only inherit from other type sections 2022-10-13 16:33:32 -04:00
bfdca9ed16 Ok yeah there I fixed it 2022-10-13 16:11:12 -04:00
5463435fae Untested rules for pulling types from other section kinds 2022-10-13 15:08:47 -04:00
d5687d7b0e Uhhh 2022-10-13 13:30:11 -04:00
561e893327 Fixed test case to include new stuff 2022-10-13 02:26:05 -04:00
b8693af68b Added some permission checks 2022-10-13 02:20:47 -04:00
ae50fab159 Small parser and ToString fixes 2022-10-13 00:18:32 -04:00
c290b3a3d7 Properly analyze member types 2022-10-13 00:01:49 -04:00
f817894b49 Added untested object member analysis 2022-10-12 23:25:21 -04:00
aaf268d0d1 Permissions of sections in other modules are respected 2022-10-12 15:48:22 -04:00
15fa122547 Parser no longer returns io.EOF when done parsing 2022-10-12 14:27:26 -04:00
ccc303d93c Fixed meta test case to account for new relative path resolution 2022-10-12 14:23:09 -04:00
3662b2e298 Fixed type section parsing at EOF 2022-10-12 14:21:19 -04:00
b2fadd2fd3 Analyzer now no longer attempts to analyze an invalid tree 2022-10-12 13:14:53 -04:00
5d27befb6f Fixed require paths
Previously, relative require paths would be resolved based on the
current working directory. They are now resolved based on the path
of the module.
2022-10-12 13:11:36 -04:00
150fc0dd4a Error when something inherits a non-existent type 2022-10-12 13:06:59 -04:00
d4146ac6ce Member analysis stub 2022-10-12 13:05:19 -04:00
83fbd38c75 Parser now sets type member location 2022-10-12 12:39:00 -04:00
32faab8b36 Added method to figure out if a type has a particular member 2022-10-12 02:57:18 -04:00
2a8476282e Get rid of parser/default-values.go 2022-10-12 01:02:17 -04:00
3a9855fe88 Create translator stub 2022-10-12 00:58:58 -04:00
291aad8aad Made documentation a bit better 2022-10-12 00:48:55 -04:00
1196bb3801 Privated lexingOperation 2022-10-12 00:00:34 -04:00
b92a3dcba3 Privated parsingOperation 2022-10-11 23:57:27 -04:00
89b432c6fd Privated analysisOperation 2022-10-11 23:53:38 -04:00
1924892ab6 Fixed isSingular 2022-10-11 23:51:55 -04:00
7581541ff5 Added a locatable node trait 2022-10-11 18:31:37 -04:00
cd670d05c5 Made node traits file for semantic table nodes 2022-10-11 18:12:53 -04:00
67c94fb0e8 Special function for type checking and returning an error in one fell swop 2022-10-11 18:03:44 -04:00
d74f3a40dd Errors encountered while analyzing are no longer ignored 2022-10-11 17:13:37 -04:00
41724a7e03 Added untested type mismatch error reporting thing 2022-10-11 17:04:18 -04:00
020833c4c6 Added isSingular to method (this is cool) 2022-10-11 16:20:12 -04:00
b8c57d5a56 StringLiteral.canBePassedAs allows variable arrays 2022-10-11 15:09:44 -04:00
942a52f7c6 Merge pull request 'Add dereference parsing' (#17) from parse-dereferences into main
Reviewed-on: arf/arf#17
2022-10-11 17:37:40 +00:00
cdebedb839 Fixed test case 2022-10-11 13:36:11 -04:00
49e834860f Fixed dereference parsing 2022-10-11 13:35:11 -04:00
a7588f7416 Added untested dereference parsing 2022-10-11 13:31:17 -04:00
fae8bedfa9 Dereference parsing stub 2022-10-11 11:31:44 -04:00
1cd7511ced Add dereference to tree 2022-10-11 11:23:50 -04:00
746fda6843 No we don't want that 2022-10-11 11:15:16 -04:00
56a3ca509a Rewrote func test case 2022-10-11 11:12:37 -04:00
f58fe4cf26 Added editorconfig file 2022-10-05 17:35:55 -04:00
48b53e48f3 Merge pull request 'remove-rune-literal' (#8) from remove-rune-literal into main
Reviewed-on: arf/arf#8
2022-10-05 20:21:43 +00:00
6d5bb59712 Removed runes from analyzer ez 2022-10-04 17:25:05 -04:00
c7e6c9299a Removed runes from the test case 2022-10-04 17:13:08 -04:00
b6d3c04acd Removed runes from parser 2022-10-04 17:07:31 -04:00
c42f4f46fc Removed excess data in rune test case 2022-10-04 16:51:53 -04:00
6a72cc9f12 Some test case fixes for the lexer 2022-10-04 16:47:32 -04:00
7af98d1c6f Removed rune literals from analyzer 2022-10-04 16:35:00 -04:00
5c286cf955 Added some useful type checking thigns to literals 2022-10-04 16:19:26 -04:00
e2947eab8c Added permissions to analyzed sections 2022-10-01 17:21:17 -04:00
07540e0abc Added more stuff to type test case 2022-10-01 17:12:43 -04:00
0d53d7ad32 Table ToString outputs sections in alphabetical order 2022-09-30 03:46:29 -04:00
47cb89206a Analyzer now understands type section default values 2022-09-30 00:04:28 -04:00
d117e2727c Analyzer attempts to find the source of types 2022-09-29 22:54:32 -04:00
1300f87cb5 when you 2022-09-29 20:28:51 -04:00
52727a1996 Nevermind this way is far better 2022-09-29 18:25:56 -04:00
8ead560bfb Test case matches absolute paths on system 2022-09-29 18:14:25 -04:00
bb4a5472e1 Less gooooo! 2022-09-29 18:09:52 -04:00
ed4c9aa0d2 Iterator actually advances now 2022-09-29 17:34:51 -04:00
b2cc45abec Merge pull request 'revert-complexity' (#7) from revert-complexity into main
Reviewed-on: arf/arf#7
2022-09-29 20:06:53 +00:00
3e1acdc74a Face test is now passed 2022-09-29 15:52:14 -04:00
2ceb3f8174 Interfaces get parsed properly (i think) 2022-09-29 15:45:25 -04:00
4811ea5257 Wrote interface test case 2022-09-29 11:28:12 -04:00
51428e3755 Pass skim test 2022-09-29 11:15:58 -04:00
8b88e1d440 Sort ToString of requires 2022-09-29 11:06:45 -04:00
a31c975c9d Func section tostrings and parses output values properly 2022-09-29 11:02:37 -04:00
7374de2633 Func section tostring fixes 2022-09-29 02:43:24 -04:00
94967d25e2 Removed let phrases
We don't need them anymore
2022-09-29 02:37:14 -04:00
290f8799cf Parser parses function outputs 2022-09-29 02:29:35 -04:00
5f0e4697b5 Made some updates to func test case 2022-09-29 02:18:47 -04:00
011c968192 Type section now passes test 2022-09-29 02:13:22 -04:00
23072b5476 Type members actually get ToString'd now 2022-09-29 02:10:58 -04:00
06f9b5b71c parseType sets the type kind in all cases 2022-09-29 02:04:44 -04:00
16fe6afdff Fixed segfault in Type.ToString 2022-09-29 02:03:19 -04:00
6c02e45e2e Untested type section parsing yay 2022-09-29 02:01:31 -04:00
58af5f3f15 Put type members back where they were 2022-09-28 11:07:39 -04:00
1bd886fea0 Rewrote type section correct test 2022-09-28 10:36:29 -04:00
f4c079786b Made data test cases consistent with eachother 2022-09-27 18:46:17 -04:00
3a38465368 Fixed DataSection/TypeSection.ToString 2022-09-27 18:43:40 -04:00
6c70e9c50d Fixed data section parsing 2022-09-27 18:42:00 -04:00
93bc742339 Fixed enum test case to match ToString 2022-09-27 18:23:23 -04:00
3a4ccdda10 Fixed List.ToString for non breakline 2022-09-27 18:18:38 -04:00
8dd90e1c6b Implemented list parsing 2022-09-27 18:03:27 -04:00
37a216a53d Fixed enum parsing
I think I did anyways. It wont parse either way becasue I haven't implemented
lists.
2022-09-27 17:36:39 -04:00
38409db74b Updated enum correct test case 2022-09-27 17:05:13 -04:00
870a33f4c2 Untested, updated enum parsing 2022-09-27 17:00:44 -04:00
26f887dfcc tree-tostring is free of compiler errors 2022-09-27 16:13:02 -04:00
cd9de16338 Removed previous code from type-notation.go 2022-09-27 15:48:47 -04:00
4228d2b4cf Operators can no longer be arguments 2022-09-27 14:48:05 -04:00
1ed612b3d1 Values are now properly referred to as arguments 2022-09-27 14:26:02 -04:00
48325e224b Renamed method recievers from trait to node in node-traits 2022-09-27 14:18:46 -04:00
873d6c89b1 Rewrote parser test case input files 2022-09-27 14:17:41 -04:00
c4101dcd33 More tree changes 2022-09-27 14:17:03 -04:00
edd4b39642 Parser tree changes 2022-09-26 18:28:21 -04:00
72 changed files with 2892 additions and 1336 deletions

.editorconfig (new file, +5)

@@ -0,0 +1,5 @@
[*]
end_of_line = lf
insert_final_newline = true
indent_style = tab
indent_size = 8


@@ -1,14 +1,25 @@
/*
Package analyzer implements a semantic analyzer for the ARF language. In it,
there is a function called Analyze which takes in a module path and returns a
table of sections representative of that module. The result of this operation is
not cached.
The section table returned by Analyze can be expected to be both syntactically
correct and semantically sound.
This package automatically invokes the parser and lexer packages.
*/
package analyzer
import "os"
import "fmt"
import "path/filepath"
// import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
// AnalysisOperation holds information about an ongoing analysis operation.
type AnalysisOperation struct {
// analysisOperation holds information about an ongoing analysis operation.
type analysisOperation struct {
sectionTable SectionTable
modulePath string
@@ -17,15 +28,16 @@ type AnalysisOperation struct {
currentTree parser.SyntaxTree
}
// Analyze performs a semantic analyisys on the module specified by path, and
// returns a SectionTable that can be translated into C.
// Analyze performs a semantic analysis on the module specified by path, and
// returns a SectionTable that can be translated into C. The result of this is
// not cached.
func Analyze (modulePath string, skim bool) (table SectionTable, err error) {
if modulePath[0] != '/' {
cwd, _ := os.Getwd()
modulePath = filepath.Join(cwd, modulePath)
}
analyzer := AnalysisOperation {
analyzer := analysisOperation {
sectionTable: make(SectionTable),
modulePath: modulePath,
}
@@ -37,15 +49,18 @@ func Analyze (modulePath string, skim bool) (table SectionTable, err error) {
// analyze performs an analysis operation given the state of the operation
// struct.
func (analyzer *AnalysisOperation) analyze () (err error) {
tree, err := parser.Fetch(analyzer.modulePath, false)
func (analyzer *analysisOperation) analyze () (err error) {
var tree parser.SyntaxTree
tree, err = parser.Fetch(analyzer.modulePath, false)
if err != nil { return }
sections := tree.Sections()
for !sections.End() {
_, err = analyzer.fetchSection(locator {
modulePath: analyzer.modulePath,
name: sections.Value().Name(),
name: sections.Key(),
})
if err != nil { return err }
sections.Next()
}
@@ -56,13 +71,16 @@ func (analyzer *AnalysisOperation) analyze () (err error) {
// been analyzed, it analyzes it first. If the section does not actually exist,
// a nil section is returned. When this happens, an error should be created on
// whatever syntax tree node "requested" the section be analyzed.
func (analyzer *AnalysisOperation) fetchSection (
func (analyzer *analysisOperation) fetchSection (
where locator,
) (
section Section,
err error,
) {
var exists bool
section, exists = analyzer.resolvePrimitive(where)
if exists { return }
section, exists = analyzer.sectionTable[where]
if exists { return }
@@ -75,7 +93,7 @@ func (analyzer *AnalysisOperation) fetchSection (
return
}
var parsedSection = tree.LookupSection(where.name)
var parsedSection = tree.LookupSection("", where.name)
if parsedSection == nil {
section = nil
return
@@ -94,7 +112,7 @@ func (analyzer *AnalysisOperation) fetchSection (
analyzer.currentTree = previousTree
} ()
// TODO: analyze section. have analysis methods work on currentPosition
// analyze section. have analysis methods work on currentPosition
// and currentSection.
//
// while building an analyzed section, add it to the section
@@ -106,57 +124,187 @@ func (analyzer *AnalysisOperation) fetchSection (
section, err = analyzer.analyzeTypeSection()
if err != nil { return}
case parser.EnumSection:
section, err = analyzer.analyzeEnumSection()
if err != nil { return}
case parser.FaceSection:
case parser.DataSection:
section, err = analyzer.analyzeDataSection()
if err != nil { return}
case parser.FuncSection:
section, err = analyzer.analyzeFuncSection()
if err != nil { return}
}
return
}
// fetchSectionFromIdentifier is like fetchSection, but takes in an identifier
// referring to a section and returns the section. This works within the context
// of whatever module is currently being analyzed. The identifier in question
// may have more items than 1 or 2, but those will be ignored. This method
// "consumes" items from the identifier, it will return an identifier without
// those items.
func (analyzer *AnalysisOperation) fetchSectionFromIdentifier (
which parser.Identifier,
// resolvePrimitive checks to see if the locator is in the current module, and
// refers to a primitive. If it does, it returns a pointer to that primitive
// and true for exists. If it doesn't, it returns nil and false. This method is
// only to be used by analysisOperation.fetchSection.
func (analyzer *analysisOperation) resolvePrimitive (
where locator,
) (
section Section,
bitten parser.Identifier,
err error,
exists bool,
) {
bitten = which
item := bitten.Bite()
// primitives are scoped as if they are contained within the current
// module, so if the location refers to something outside of the current
// module, it is definitely not referring to a primitive.
if where.modulePath != analyzer.currentPosition.modulePath {
return
}
exists = true
switch where.name {
case "Int": section = &PrimitiveInt
case "UInt": section = &PrimitiveUInt
case "I8": section = &PrimitiveI8
case "I16": section = &PrimitiveI16
case "I32": section = &PrimitiveI32
case "I64": section = &PrimitiveI64
case "U8": section = &PrimitiveU8
case "U16": section = &PrimitiveU16
case "U32": section = &PrimitiveU32
case "U64": section = &PrimitiveU64
case "Obj": section = &PrimitiveObj
// case "Face": section = &PrimitiveFace
// case "Func": section = &PrimitiveFunc
case "String": section = &BuiltInString
default:
exists = false
}
return
}
// TODO: make this method a generalized "get this from an identifier in context
// of the current scope" method. have it return various things like sections,
// variables, functions, members, methods, etc. have it return an any.
// if no node could be found, return an error saying entity not found. if the
// node is private, return an error.
//
// when new things are defined, they should not be allowed to shadow anything
// else in above scopes. nevertheless, the method should search in this order:
//
// 1. search scopes starting with closest -> farthest
// 2. if first part of identifier is a require, get section from other module
// 3. search for section in current module
// fetchNodeFromIdentifier is like fetchSection, but takes in an identifier
// referring to any node accessible within the current scope and returns it.
// This method works within the current scope and current module. This method
// consumes items from the input identifier, and outputs the items which it did
// not consume.
func (analyzer *analysisOperation) fetchNodeFromIdentifier (
which parser.Identifier,
) (
node any,
bitten parser.Identifier,
err error,
) {
var item string
item, bitten = which.Bite()
// TODO: search scopes for variables
// the identifier must be referring to a section
var external bool
path, exists := analyzer.currentTree.ResolveRequire(item)
if exists {
// we have our module path, so get the section name
item = bitten.Bite()
item, bitten = bitten.Bite()
external = true
} else {
// that wasn't a module name, so the module path must be the our
// that wasn't a module name, so the module path must be our
// current one
path = analyzer.currentPosition.modulePath
}
// attempt to get section
var section Section
section, err = analyzer.fetchSection (locator {
name: item,
modulePath: path,
})
node = section
if err != nil { return }
if section == nil {
// return error if nothing mentioned in the identifier is accessible
if node == nil {
err = which.NewError (
"section \"" + item + "\" does not exist",
"can't find anything called \"" + item + "\" within " +
"current scope",
infoerr.ErrorKindError,
)
return
}
// return error if the section is private
if external && section.Permission() == types.PermissionPrivate {
err = which.NewError(
"this section is private, and cannot be used " +
"outside of its module",
infoerr.ErrorKindError)
return
}
return
}
// addSection adds a section to the analyzer's section table. If a section with
// that name already exists, it panics because the parser should not have given
// that to us.
func (analyzer *analysisOperation) addSection (section Section) {
_, exists := analyzer.sectionTable[section.locator()]
if exists {
panic (
"invalid state: duplicate section " +
section.locator().ToString())
}
analyzer.sectionTable[section.locator()] = section
return
}
// typeCheck checks to see if source can fit as an argument into a slot of type
// destination. If it can, it returns nil. If it can't, it returns an error
// explaining why.
func (analyzer *analysisOperation) typeCheck (
source Argument,
destination Type,
) (
err error,
) {
if !source.canBePassedAs(destination) {
err = source.NewError (
typeMismatchErrorMessage (
source.What(),
destination),
infoerr.ErrorKindError)
}
return
}
// inCurrentModule returns whether or not the specified section resides within
// the current module.
func (analyzer *analysisOperation) inCurrentModule (
section Section,
) (
inCurrent bool,
){
inCurrent =
section.locator().modulePath ==
analyzer.currentPosition.modulePath
return
}
// TODO: make a method of analyzer that, given a name, searches through all
// accessible scopes and returns the thing the name references. when analyzing
// a function, the analyzer should remember a trail of scopes.
// doIndent performs a fmt.Sprint operation on input, indenting the string. This
// does not add a trailing newline.
func doIndent (indent int, input ...any) (output string) {
for index := 0; index < indent; index ++ {
output += "\t"


@@ -1,61 +1,117 @@
package analyzer
import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/parser"
// import "git.tebibyte.media/arf/arf/infoerr"
import "git.tebibyte.media/arf/arf/infoerr"
// Argument represents a value that can be placed anywhere a value goes. This
// allows things like phrases being arguments to other phrases.
type Argument interface {
// Phrase
// List
// Dereference
// Subscript
// Object
// Array
// Variable
// IntLiteral
// UIntLiteral
// FloatLiteral
// StringLiteral
// RuneLiteral
What () (what Type)
Location () (location file.Location)
NewError (message string, kind infoerr.ErrorKind) (err error)
ToString (indent int) (output string)
Equals (value any) (equal bool)
Value () (value any)
Resolve () (constant Argument, err error)
canBePassedAs (what Type) (allowed bool)
}
// phrase
// is what
// list
// is what
// dereference
// if length is greater than 1
// length is 1
// is what (ignore length)
// else
// is points of reduced of what
// variable
// is what
// int
// primitive is basic signed | float
// length is 1
// uint
// primitive is basic signed | unsigned | float
// length is 1
// float
// primitive is basic float
// length is 1
// string
// primitive is basic signed | unsigned | float
// length is equal
// or
// reduced is variable array
// reduced points to signed | unsigned | float
// length is 1
// analyzeArgument analyzes an argument
func (analyzer AnalysisOperation) analyzeArgument (
func (analyzer analysisOperation) analyzeArgument (
inputArgument parser.Argument,
) (
outputArgument Argument,
err error,
) {
switch inputArgument.Kind() {
case parser.ArgumentKindNil:
panic("invalid state: attempt to analyze nil argument")
case parser.ArgumentKindPhrase:
// TODO
case parser.ArgumentKindDereference:
// TODO
case parser.ArgumentKindSubscript:
case parser.ArgumentKindObjectDefaultValues:
case parser.ArgumentKindArrayDefaultValues:
case parser.ArgumentKindList:
// TODO
case parser.ArgumentKindIdentifier:
// TODO
case parser.ArgumentKindDeclaration:
// TODO
case parser.ArgumentKindInt:
outputArgument = IntLiteral {
value: inputArgument.Value().(int64),
locatable: locatable {
location: inputArgument.Location(),
},
}
case parser.ArgumentKindUInt:
outputArgument = UIntLiteral {
value: inputArgument.Value().(uint64),
locatable: locatable {
location: inputArgument.Location(),
},
}
case parser.ArgumentKindFloat:
outputArgument = FloatLiteral {
value: inputArgument.Value().(float64),
locatable: locatable {
location: inputArgument.Location(),
},
}
case parser.ArgumentKindString:
case parser.ArgumentKindRune:
case parser.ArgumentKindOperator:
outputArgument = StringLiteral {
value: inputArgument.Value().(string),
locatable: locatable {
location: inputArgument.Location(),
},
}
}
return
}

analyzer/block.go (new file, +31)

@@ -0,0 +1,31 @@
package analyzer
import "git.tebibyte.media/arf/arf/parser"
// Block represents a scoped block of phrases.
type Block struct {
locatable
phrases []Phrase
// TODO: create a scope struct and embed it
}
func (block Block) ToString (indent int) (output string) {
output += doIndent(indent, "block\n")
// TODO: variables
// TODO: phrases
return
}
// analyzeBlock analyzes a scoped block of phrases.
// TODO: have a way to "start out" with a list of variables for things like
// arguments, and declarations inside of control flow statements
func (analyzer *analysisOperation) analyzeBlock (
inputBlock parser.Block,
) (
block Block,
err error,
) {
return
}


@@ -0,0 +1,8 @@
package analyzer
func typeMismatchErrorMessage (source Type, destination Type) (message string) {
message += source.Describe()
message += " cannot be used as "
message += destination.Describe()
return
}

analyzer/data-section.go (new file, +75)

@@ -0,0 +1,75 @@
package analyzer
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
// DataSection represents a global variable section.
type DataSection struct {
sectionBase
what Type
argument Argument
external bool
}
// ToString returns all data stored within the data section, in string form.
func (section DataSection) ToString (indent int) (output string) {
output += doIndent(indent, "dataSection ")
output += section.permission.ToString() + " "
output += section.where.ToString()
output += "\n"
output += section.what.ToString(indent + 1)
if section.argument != nil {
output += section.argument.ToString(indent + 1)
}
return
}
// analyzeDataSection analyzes a data section.
func (analyzer analysisOperation) analyzeDataSection () (
section Section,
err error,
) {
outputSection := DataSection { }
outputSection.where = analyzer.currentPosition
section = &outputSection
analyzer.addSection(section)
inputSection := analyzer.currentSection.(parser.DataSection)
outputSection.location = analyzer.currentSection.Location()
outputSection.permission = inputSection.Permission()
// get inherited type
outputSection.what, err = analyzer.analyzeType(inputSection.Type())
if err != nil { return }
// data sections are only allowed to inherit type, enum, and face sections
_, inheritsFromTypeSection := outputSection.what.actual.(*TypeSection)
_, inheritsFromEnumSection := outputSection.what.actual.(*EnumSection)
// _, inheritsFromFaceSection := outputSection.what.actual.(*FaceSection)
	if !inheritsFromTypeSection && !inheritsFromEnumSection {
		err = inputSection.Type().NewError (
			"data sections can only inherit from type, enum, and " +
			"face sections",
infoerr.ErrorKindError)
return
}
if inputSection.External() {
outputSection.external = true
} else if !inputSection.Argument().Nil() {
outputSection.argument,
err = analyzer.analyzeArgument(inputSection.Argument())
if err != nil { return }
// type check default value
err = analyzer.typeCheck (
outputSection.argument,
outputSection.what)
if err != nil { return }
}
outputSection.complete = true
return
}


@@ -0,0 +1,20 @@
package analyzer
import "testing"
func TestDataSection (test *testing.T) {
checkTree ("../tests/analyzer/dataSection", false,
`dataSection ro ../tests/analyzer/dataSection.aBasicInt
type 1 basic Int
uintLiteral 5
dataSection ro ../tests/analyzer/dataSection.bRune
type 1 basic Int
stringLiteral 'A'
dataSection ro ../tests/analyzer/dataSection.cString
type 1 basic String
stringLiteral 'A very large bird'
dataSection ro ../tests/analyzer/dataSection.dCharBuffer
type 32 basic U8
stringLiteral 'A very large bird` + "\000" + `'
`, test)
}

analyzer/enum-section.go (new file, +212)

@@ -0,0 +1,212 @@
package analyzer
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
// EnumSection represents an enumerated type section.
type EnumSection struct {
sectionBase
what Type
members []EnumMember
argument Argument
}
// EnumMember is a member of an enumerated type.
type EnumMember struct {
locatable
name string
argument Argument
}
// ToString returns all data stored within the member, in string form.
func (member EnumMember) ToString (indent int) (output string) {
output += doIndent(indent, "member ", member.name, "\n")
if member.argument != nil {
output += member.argument.ToString(indent + 1)
}
return
}
// ToString returns all data stored within the type section, in string form.
func (section EnumSection) ToString (indent int) (output string) {
output += doIndent(indent, "enumSection ")
output += section.permission.ToString() + " "
output += section.where.ToString()
output += "\n"
if section.argument != nil {
output += section.argument.ToString(indent + 1)
}
output += section.what.ToString(indent + 1)
for _, member := range section.members {
output += member.ToString(indent + 1)
}
return
}
// analyzeEnumSection analyzes an enumerated type section.
func (analyzer analysisOperation) analyzeEnumSection () (
section Section,
err error,
) {
outputSection := EnumSection { }
outputSection.where = analyzer.currentPosition
section = &outputSection
analyzer.addSection(section)
inputSection := analyzer.currentSection.(parser.EnumSection)
outputSection.location = analyzer.currentSection.Location()
if inputSection.Permission() == types.PermissionReadWrite {
err = inputSection.NewError (
"read-write (rw) permission not understood in this " +
"context, try read-only (ro)",
infoerr.ErrorKindError)
return
}
outputSection.permission = inputSection.Permission()
// get inherited type
outputSection.what, err = analyzer.analyzeType(inputSection.Type())
if err != nil { return }
// if the inherited type is a single number, we take note of that here
// because it will allow us to do things like automatically fill in
// member values if they are not specified.
isNumeric :=
outputSection.what.isNumeric() &&
outputSection.what.isSingular()
// enum sections are only allowed to inherit from type sections
_, inheritsFromTypeSection := outputSection.what.actual.(*TypeSection)
if !inheritsFromTypeSection {
err = inputSection.Type().NewError (
"enum sections can only inherit from other type " +
"sections",
infoerr.ErrorKindError)
return
}
// analyze members
for index := 0; index < inputSection.Length(); index ++ {
inputMember := inputSection.Item(index)
outputMember := EnumMember { }
outputMember.location = inputMember.Location()
outputMember.name = inputMember.Name()
if !inputMember.Argument().Nil() {
outputMember.argument,
err = analyzer.analyzeArgument(inputMember.Argument())
if err != nil { return }
// attempt to resolve the argument to a single constant
// literal
outputMember.argument, err =
outputMember.argument.Resolve()
if err != nil { return }
// type check value
err = analyzer.typeCheck (
outputMember.argument,
outputSection.what)
if err != nil { return }
} else if !isNumeric {
// non-numeric enums must have filled in values
err = inputMember.NewError (
"member value must be specified manually for " +
"non-numeric enums",
infoerr.ErrorKindError)
return
}
for _, compareMember := range outputSection.members {
if compareMember.name == outputMember.name {
err = inputMember.NewError (
"enum member names must be unique",
infoerr.ErrorKindError)
return
}
if outputMember.argument == nil { continue }
if compareMember.argument == nil { continue }
if compareMember.argument.Equals (
outputMember.argument.Value(),
) {
err = inputMember.NewError (
"enum member values must be unique",
infoerr.ErrorKindError)
return
}
}
outputSection.members = append (
outputSection.members,
outputMember)
}
// fill in members that do not have values
if isNumeric {
for index, fillInMember := range outputSection.members {
if fillInMember.argument != nil { continue }
max := uint64(0)
for _, compareMember := range outputSection.members {
compareValue := compareMember.argument
switch compareValue.(type) {
case IntLiteral:
number := uint64 (
compareValue.(IntLiteral).value)
if number > max {
max = number
}
case UIntLiteral:
number := uint64 (
compareValue.(UIntLiteral).value)
if number > max {
max = number
}
case FloatLiteral:
number := uint64 (
compareValue.(FloatLiteral).value)
if number > max {
max = number
}
case nil:
// do nothing
default:
panic (
"invalid state: illegal " +
"argument type while " +
"attempting to fill in enum " +
"member value for " +
fillInMember.name + " in " +
outputSection.location.Describe())
}
}
// fill in
fillInMember.argument = UIntLiteral {
locatable: fillInMember.locatable,
value: max + 1,
}
outputSection.members[index] = fillInMember
}
}
if len(outputSection.members) < 1 {
err = outputSection.NewError (
"cannot create an enum with no members",
infoerr.ErrorKindError)
return
}
outputSection.argument = outputSection.members[0].argument
outputSection.complete = true
return
}


@@ -0,0 +1,47 @@
package analyzer
import "testing"
func TestEnumSection (test *testing.T) {
checkTree ("../tests/analyzer/enumSection", false,
`enumSection ro ../tests/analyzer/enumSection.aWeekday
uintLiteral 1
type 1 basic Int
member sunday
uintLiteral 1
member monday
uintLiteral 2
member tuesday
uintLiteral 3
member wednesday
uintLiteral 4
member thursday
uintLiteral 5
member friday
uintLiteral 6
member saturday
uintLiteral 7
typeSection ro ../tests/analyzer/enumSection.bColor
type 1 basic U32
enumSection ro ../tests/analyzer/enumSection.cNamedColor
uintLiteral 16711680
type 1 basic bColor
member red
uintLiteral 16711680
member green
uintLiteral 65280
member blue
uintLiteral 255
enumSection ro ../tests/analyzer/enumSection.dFromFarAway
uintLiteral 5
type 1 basic dInheritFromOther
member bird
uintLiteral 5
member bread
uintLiteral 4
typeSection ro ../tests/analyzer/typeSection/required.aBasic
type 1 basic Int
typeSection ro ../tests/analyzer/typeSection.dInheritFromOther
type 1 basic aBasic
`, test)
}

63
analyzer/func-section.go Normal file

@@ -0,0 +1,63 @@
package analyzer
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
// FuncSection represents a function definition section.
type FuncSection struct {
sectionBase
root Block
external bool
}
// ToString returns all data stored within the function section, in string form.
func (section FuncSection) ToString (indent int) (output string) {
output += doIndent(indent, "funcSection ")
output += section.permission.ToString() + " "
output += section.where.ToString()
output += "\n"
// TODO: arguments
output += section.root.ToString(indent + 1)
return
}
// analyzeFuncSection analyzes a function section.
func (analyzer *analysisOperation) analyzeFuncSection () (
section Section,
err error,
) {
outputSection := FuncSection { }
outputSection.where = analyzer.currentPosition
section = &outputSection
analyzer.addSection(section)
inputSection := analyzer.currentSection.(parser.FuncSection)
outputSection.location = analyzer.currentSection.Location()
// TODO: do not do this if it is a method
if inputSection.Permission() == types.PermissionReadWrite {
err = inputSection.NewError (
"read-write (rw) permission not understood in this " +
"context, try read-only (ro)",
infoerr.ErrorKindError)
return
}
outputSection.permission = inputSection.Permission()
// TODO: analyze inputs and outputs and receiver
if inputSection.External() {
outputSection.external = true
} else {
outputSection.root, err = analyzer.analyzeBlock(inputSection.Root())
if err != nil { return }
// TODO: analyze root block if not nil
}
return
}


@@ -0,0 +1,19 @@
package analyzer
import "testing"
func TestFuncSection (test *testing.T) {
checkTree ("../tests/analyzer/funcSection", false,
`typeSection ro ../tests/analyzer/funcSection.aCString
type 1 pointer {
type 1 basic U8
}
funcSection ro ../tests/analyzer/funcSection.bArbitrary
block
arbitraryPhrase
command 'puts'
castPhrase
type aCString
arg string 'hellorld` + "\000" + `'
`, test)
}

22
analyzer/list.go Normal file

@@ -0,0 +1,22 @@
package analyzer
// import "git.tebibyte.media/arf/arf/parser"
// import "git.tebibyte.media/arf/arf/infoerr"
type List struct {
// TODO: length of what must be set to length of arguments
what Type
arguments []Argument
}
func (list List) ToString (indent int) (output string) {
// TODO
panic("TODO")
// return
}
func (list List) canBePassedAs (what Type) (allowed bool) {
// TODO
panic("TODO")
// return
}


@@ -2,39 +2,219 @@ package analyzer
import "fmt"
type IntLiteral int64
type UIntLiteral uint64
type FloatLiteral float64
type StringLiteral string
type RuneLiteral rune
// IntLiteral represents a constant signed integer value.
type IntLiteral struct {
locatable
value int64
}
// UIntLiteral represents a constant unsigned integer value.
type UIntLiteral struct {
locatable
value uint64
}
// FloatLiteral represents a constant floating point value.
type FloatLiteral struct {
locatable
value float64
}
// StringLiteral represents a constant text value.
type StringLiteral struct {
locatable
value string
}
// ToString outputs the data in the argument as a string.
func (literal IntLiteral) ToString (indent int) (output string) {
output += doIndent(indent, fmt.Sprint("arg int ", literal, "\n"))
output += doIndent(indent, fmt.Sprint("intLiteral ", literal.value, "\n"))
return
}
// What returns the type of the argument
func (literal IntLiteral) What () (what Type) {
what.actual = &PrimitiveI64
what.length = 1
return
}
// Equals returns whether the literal is equal to the specified value.
func (literal IntLiteral) Equals (value any) (equal bool) {
equal = literal.value == value
return
}
// Value returns the literal's value
func (literal IntLiteral) Value () (value any) {
value = literal.value
return
}
// Resolve resolves the argument to a constant literal, which in this case is
// trivial because the literal is already constant.
func (literal IntLiteral) Resolve () (constant Argument, err error) {
constant = literal
return
}
// canBePassedAs returns true if this literal can be implicitly cast to the
// specified type, and false if it can't.
func (literal IntLiteral) canBePassedAs (what Type) (allowed bool) {
// can be passed to singular types that are signed numbers at a
// primitive level.
allowed =
what.isSingular() &&
what.isSignedNumeric()
return
}
// ToString outputs the data in the argument as a string.
func (literal UIntLiteral) ToString (indent int) (output string) {
output += doIndent(indent, fmt.Sprint("arg uint ", literal, "\n"))
output += doIndent(indent, fmt.Sprint("uintLiteral ", literal.value, "\n"))
return
}
// What returns the type of the argument
func (literal UIntLiteral) What () (what Type) {
what.actual = &PrimitiveU64
what.length = 1
return
}
// Equals returns whether the literal is equal to the specified value.
func (literal UIntLiteral) Equals (value any) (equal bool) {
equal = literal.value == value
return
}
// Value returns the literal's value
func (literal UIntLiteral) Value () (value any) {
value = literal.value
return
}
// canBePassedAs returns true if this literal can be implicitly cast to the
// specified type, and false if it can't.
func (literal UIntLiteral) canBePassedAs (what Type) (allowed bool) {
// can be passed to singular types that are numbers at a primitive level.
allowed =
what.isSingular() &&
what.isNumeric()
return
}
// Resolve resolves the argument to a constant literal, which in this case is
// trivial because the literal is already constant.
func (literal UIntLiteral) Resolve () (constant Argument, err error) {
constant = literal
return
}
// What returns the type of the argument
func (literal FloatLiteral) What () (what Type) {
what.actual = &PrimitiveF64
what.length = 1
return
}
// ToString outputs the data in the argument as a string.
func (literal FloatLiteral) ToString (indent int) (output string) {
output += doIndent(indent, fmt.Sprint("arg float ", literal, "\n"))
output += doIndent(indent, fmt.Sprint("floatLiteral ", literal.value, "\n"))
return
}
// Equals returns whether the literal is equal to the specified value.
func (literal FloatLiteral) Equals (value any) (equal bool) {
equal = literal.value == value
return
}
// Value returns the literal's value
func (literal FloatLiteral) Value () (value any) {
value = literal.value
return
}
// Resolve resolves the argument to a constant literal, which in this case is
// trivial because the literal is already constant.
func (literal FloatLiteral) Resolve () (constant Argument, err error) {
constant = literal
return
}
// canBePassedAs returns true if this literal can be implicitly cast to the
// specified type, and false if it can't.
func (literal FloatLiteral) canBePassedAs (what Type) (allowed bool) {
// must be a singular value
if !what.isSingular() { return }
// can be passed to types that are floats at a primitive level.
primitive := what.underlyingPrimitive()
switch primitive {
case
&PrimitiveF64,
&PrimitiveF32:
allowed = true
}
return
}
// What returns the type of the argument
func (literal StringLiteral) What () (what Type) {
what.actual = &BuiltInString
what.length = 1
return
}
// ToString outputs the data in the argument as a string.
func (literal StringLiteral) ToString (indent int) (output string) {
output += doIndent(indent, fmt.Sprint("arg string \"", literal, "\"\n"))
output += doIndent(indent, fmt.Sprint("stringLiteral '", literal.value, "'\n"))
return
}
// ToString outputs the data in the argument as a string.
func (literal RuneLiteral) ToString (indent int) (output string) {
output += doIndent(indent, fmt.Sprint("arg rune '", literal, "'\n"))
// Equals returns whether the literal is equal to the specified value.
func (literal StringLiteral) Equals (value any) (equal bool) {
equal = literal.value == value
return
}
// Value returns the literal's value
func (literal StringLiteral) Value () (value any) {
value = literal.value
return
}
// Resolve resolves the argument to a constant literal, which in this case is
// trivial because the literal is already constant.
func (literal StringLiteral) Resolve () (constant Argument, err error) {
constant = literal
return
}
// canBePassedAs returns true if this literal can be implicitly cast to the
// specified type, and false if it can't.
func (literal StringLiteral) canBePassedAs (what Type) (allowed bool) {
// can be passed to types that are numbers at a primitive level, or
// types that can be reduced to a variable array pointing to numbers at
// a primitive level.
// we don't check the length of what, because when setting a static
// array to a string literal, excess data will be cut off (and if it is
// shorter, the excess space will be filled with zeros).
reduced, worked := what.reduce()
if worked {
// if the type was reduced to a non-basic type, only pass to
// singular dynamic arrays.
if !what.isSingular() { return }
if reduced.kind != TypeKindVariableArray { return }
what = reduced
allowed = what.points.isNumeric()
} else {
allowed = what.isNumeric()
}
return
}

78
analyzer/node-traits.go Normal file

@@ -0,0 +1,78 @@
package analyzer
import "path/filepath"
import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/infoerr"
// locatable allows a tree node to have a location.
type locatable struct {
location file.Location
}
// Location returns the location of the node.
func (node locatable) Location () (location file.Location) {
location = node.location
return
}
// NewError creates a new error at the node's location.
func (node locatable) NewError (
message string,
kind infoerr.ErrorKind,
) (
err error,
) {
err = infoerr.NewError(node.location, message, kind)
return
}
// sectionBase is a struct that all sections must embed.
type sectionBase struct {
where locator
complete bool
permission types.Permission
locatable
}
// Name returns the name of the section.
func (section sectionBase) Name () (name string) {
name = section.where.name
return
}
// ModulePath returns the full path of the module the section came from.
func (section sectionBase) ModulePath () (path string) {
path = section.where.modulePath
return
}
// ModuleName returns the name of the module where the section came from.
func (section sectionBase) ModuleName () (name string) {
name = filepath.Base(section.where.modulePath)
return
}
// Complete returns whether the section has been completed.
func (section sectionBase) Complete () (complete bool) {
complete = section.complete
return
}
// Permission returns the permission of the section.
func (section sectionBase) Permission () (permission types.Permission) {
permission = section.permission
return
}
// locator returns the module path and name of the section.
func (section sectionBase) locator () (where locator) {
where = section.where
return
}
// phraseBase is a struct that all phrases must embed.
type phraseBase struct {
locatable
returnsTo []Argument
}

37
analyzer/phrase.go Normal file

@@ -0,0 +1,37 @@
package analyzer
import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
type Phrase interface {
// Provided by phraseBase
Location () (location file.Location)
NewError (message string, kind infoerr.ErrorKind) (err error)
// Must be implemented by each individual phrase
ToString (indent int) (output string)
}
type ArbitraryPhrase struct {
phraseBase
command string
arguments []Argument
}
type CastPhrase struct {
phraseBase
command Argument
arguments []Argument
}
// TODO more phrases lol
func (analyzer *analysisOperation) analyzePhrase (
inputPhrase parser.Phrase,
) (
phrase Phrase,
err error,
) {
return
}


@@ -2,22 +2,62 @@ package analyzer
// This is a global, canonical list of primitive and built-in types.
var PrimitiveInt = createPrimitive("Int", Type {})
var PrimitiveUInt = createPrimitive("UInt", Type {})
var PrimitiveI8 = createPrimitive("I8", Type {})
var PrimitiveI16 = createPrimitive("I16", Type {})
var PrimitiveI32 = createPrimitive("I32", Type {})
var PrimitiveI64 = createPrimitive("I64", Type {})
var PrimitiveU8 = createPrimitive("U8", Type {})
var PrimitiveU16 = createPrimitive("U16", Type {})
var PrimitiveU32 = createPrimitive("U32", Type {})
var PrimitiveU64 = createPrimitive("U64", Type {})
var PrimitiveObjt = createPrimitive("Objt", Type {})
var PrimitiveFace = createPrimitive("Face", Type {})
// PrimitiveF32 is a 32 bit floating point primitive.
var PrimitiveF32 = createPrimitive("F32", Type { length: 1 })
// PrimitiveF64 is a 64 bit floating point primitive.
var PrimitiveF64 = createPrimitive("F64", Type { length: 1 })
// PrimitiveInt is a signed integer word primitive.
var PrimitiveInt = createPrimitive("Int", Type { length: 1 })
// PrimitiveUInt is an unsigned integer word primitive.
var PrimitiveUInt = createPrimitive("UInt", Type { length: 1 })
// PrimitiveI8 is a signed 8 bit integer primitive.
var PrimitiveI8 = createPrimitive("I8", Type { length: 1 })
// PrimitiveI16 is a signed 16 bit integer primitive.
var PrimitiveI16 = createPrimitive("I16", Type { length: 1 })
// PrimitiveI32 is a signed 32 bit integer primitive.
var PrimitiveI32 = createPrimitive("I32", Type { length: 1 })
// PrimitiveI64 is a signed 64 bit integer primitive.
var PrimitiveI64 = createPrimitive("I64", Type { length: 1 })
// PrimitiveU8 is an unsigned 8 bit integer primitive.
var PrimitiveU8 = createPrimitive("U8", Type { length: 1 })
// PrimitiveU16 is an unsigned 16 bit integer primitive.
var PrimitiveU16 = createPrimitive("U16", Type { length: 1 })
// PrimitiveU32 is an unsigned 32 bit integer primitive.
var PrimitiveU32 = createPrimitive("U32", Type { length: 1 })
// PrimitiveU64 is an unsigned 64 bit integer primitive.
var PrimitiveU64 = createPrimitive("U64", Type { length: 1 })
// PrimitiveObj is a blank object primitive.
var PrimitiveObj = createPrimitive("Obj", Type { length: 1 })
// TODO: make these two be interface sections
// PrimitiveFace is a blank interface primitive. It accepts any value.
// var PrimitiveFace = createPrimitive("Face", Type {})
// PrimitiveFunc is a blank function interface primitive. It is useless.
// var PrimitiveFunc = createPrimitive("Func", Type {})
// BuiltInString is a built in string type. It is a dynamic array of UTF-32
// codepoints.
var BuiltInString = createPrimitive("String", Type {
actual: &PrimitiveU32,
points: &Type {
actual: &PrimitiveU32,
length: 1,
},
kind: TypeKindVariableArray,
length: 1,
})
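BuiltInString is described above as a dynamic array of UTF-32 codepoints. As an illustration of that representation (not compiler code), ranging over a Go string yields exactly those codepoints:

```go
package main

import "fmt"

// toCodepoints models the String built-in described above: a dynamic array
// of UTF-32 codepoints, one U32 per rune. This helper is illustrative only.
func toCodepoints(s string) []uint32 {
	points := make([]uint32, 0, len(s))
	for _, r := range s { // ranging over a string yields runes (codepoints)
		points = append(points, uint32(r))
	}
	return points
}

func main() {
	fmt.Println(toCodepoints("hi!")) // [104 105 33]
}
```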
// createPrimitive provides a quick way to construct a primitive for the above


@@ -1,6 +1,11 @@
package analyzer
import "os"
import "sort"
import "path/filepath"
import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/infoerr"
// locator uniquely identifies a section in the section table.
type locator struct {
@@ -9,7 +14,13 @@ type locator struct {
}
func (where locator) ToString () (output string) {
output += where.modulePath + "." + where.name
cwd, _ := os.Getwd()
modulePath, err := filepath.Rel(cwd, where.modulePath)
if err != nil {
panic("can't get relative path: " + err.Error())
}
output += modulePath + "." + where.name
return
}
@@ -19,12 +30,48 @@ type SectionTable map[locator] Section
// ToString returns the data stored in the table as a string.
func (table SectionTable) ToString (indent int) (output string) {
for _, section := range table {
sortedKeys := make(locatorArray, len(table))
index := 0
for key, _ := range table {
sortedKeys[index] = key
index ++
}
sort.Sort(sortedKeys)
for _, name := range sortedKeys {
section := table[name]
output += section.ToString(indent)
}
return
}
}
// locatorArray holds a sortable array of locators
type locatorArray []locator
// Len returns the length of the locator array
func (array locatorArray) Len () (length int) {
length = len(array)
return
}
// Less returns whether item at index left is less than item at index right.
func (array locatorArray) Less (left, right int) (less bool) {
leftLocator := array[left]
rightLocator := array[right]
less =
leftLocator.modulePath + leftLocator.name <
rightLocator.modulePath + rightLocator.name
return
}
// Swap swaps the elments at indices left and right.
func (array locatorArray) Swap (left, right int) {
temp := array[left]
array[left] = array[right]
array[right] = temp
}
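SectionTable.ToString needs locatorArray because Go randomizes map iteration order; sorting the keys first is the standard way to get stable output. A minimal sketch of the same pattern over an ordinary map:

```go
package main

import (
	"fmt"
	"sort"
)

// render walks a map in sorted key order, the same pattern
// SectionTable.ToString uses above to produce deterministic output.
func render(table map[string]string) string {
	keys := make([]string, 0, len(table))
	for key := range table {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	out := ""
	for _, key := range keys {
		out += key + " " + table[key] + "\n"
	}
	return out
}

func main() {
	fmt.Print(render(map[string]string{"b": "two", "a": "one"}))
}
```

Without the sort, the analyzer's tree-comparison tests would fail intermittently, since two runs over the same table could print sections in different orders.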
// Section is a semantically analyzed section.
type Section interface {
@@ -33,37 +80,11 @@ type Section interface {
Complete () (complete bool)
ModulePath () (path string)
ModuleName () (path string)
Permission () (permission types.Permission)
Location () (location file.Location)
NewError (message string, kind infoerr.ErrorKind) (err error)
locator () (where locator)
// Must be implemented by each individual section
ToString (indent int) (output string)
}
// sectionBase is a struct that all sections must embed.
type sectionBase struct {
where locator
complete bool
}
// Name returns the name of the section.
func (section sectionBase) Name () (name string) {
name = section.where.name
return
}
// ModulePath returns the full path of the module the section came from.
func (section sectionBase) ModulePath () (path string) {
path = section.where.modulePath
return
}
// ModuleName returns the name of the module where the section came from.
func (section sectionBase) ModuleName () (name string) {
name = filepath.Base(section.where.modulePath)
return
}
// Complete returns whether the section has been completed.
func (section sectionBase) Complete () (complete bool) {
complete = section.complete
return
}


@@ -1,5 +1,6 @@
package analyzer
import "fmt"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
@@ -8,34 +9,285 @@ import "git.tebibyte.media/arf/arf/infoerr"
type TypeSection struct {
sectionBase
what Type
complete bool
argument Argument
members []ObjectMember
}
// ObjectMember is a member of an object type.
type ObjectMember struct {
locatable
name string
bitWidth uint64
permission types.Permission
what Type
argument Argument
}
// ToString returns all data stored within the member, in string form.
func (member ObjectMember) ToString (indent int) (output string) {
output += doIndent (
indent, "member ",
member.permission.ToString(), " ",
member.name)
if member.bitWidth > 0 {
output += fmt.Sprint(" width ", member.bitWidth)
}
output += "\n"
output += member.what.ToString(indent + 1)
if member.argument != nil {
output += member.argument.ToString(indent + 1)
}
return
}
// ToString returns all data stored within the type section, in string form.
func (section TypeSection) ToString (indent int) (output string) {
output += doIndent(indent, "typeSection ", section.where.ToString(), "\n")
output += doIndent(indent, "typeSection ")
output += section.permission.ToString() + " "
output += section.where.ToString()
output += "\n"
output += section.what.ToString(indent + 1)
if section.argument != nil {
output += section.argument.ToString(indent + 1)
}
for _, member := range section.members {
output += member.ToString(indent + 1)
}
return
}
// Member returns the object member with the specified name, recursing up the
// inheritance chain until a match is found.
func (section TypeSection) Member (
name string,
) (
member ObjectMember,
exists bool,
) {
switch section.what.kind {
case TypeKindBasic:
for _, currentMember := range section.members {
if currentMember.name == name {
member = currentMember
exists = true
break
}
}
if !exists {
if section.what.actual == nil { return }
actual, isTypeSection := section.what.actual.(*TypeSection)
if !isTypeSection { return }
member, exists = actual.Member(name)
}
case TypeKindPointer:
points := section.what.points
if points == nil { return }
actual, isTypeSection := points.actual.(*TypeSection)
if !isTypeSection { return }
member, exists = actual.Member(name)
}
return
}
// analyzeTypeSection analyzes a type section.
func (analyzer AnalysisOperation) analyzeTypeSection () (
func (analyzer analysisOperation) analyzeTypeSection () (
section Section,
err error,
) {
inputSection := analyzer.currentSection.(parser.TypeSection)
if inputSection.Permission() == types.PermissionReadWrite {
err = inputSection.NewError (
"rw permission not understood in this context, try ro",
infoerr.ErrorKindError)
}
outputSection := TypeSection { }
outputSection.where = analyzer.currentPosition
section = &outputSection
analyzer.addSection(section)
inputSection := analyzer.currentSection.(parser.TypeSection)
outputSection.location = analyzer.currentSection.Location()
if inputSection.Permission() == types.PermissionReadWrite {
err = inputSection.NewError (
"read-write (rw) permission not understood in this " +
"context, try read-only (ro)",
infoerr.ErrorKindError)
return
}
outputSection.permission = inputSection.Permission()
// get inherited type
outputSection.what, err = analyzer.analyzeType(inputSection.Type())
if err != nil { return }
if !inputSection.Argument().Nil() {
outputSection.argument,
err = analyzer.analyzeArgument(inputSection.Argument())
if err != nil { return }
// type check default value
err = analyzer.typeCheck (
outputSection.argument,
outputSection.what)
if err != nil { return }
}
// analyze members
isObj := outputSection.what.underlyingPrimitive() == &PrimitiveObj
if isObj {
err = analyzer.analyzeObjectMembers (
&outputSection,
inputSection)
if err != nil { return }
} else if inputSection.MembersLength() > 0 {
// if there are members, and the inherited type does not have
// Obj as a primitive, throw an error.
err = inputSection.Member(0).NewError (
"members can only be defined on types descending " +
"from Obj",
infoerr.ErrorKindError)
if err != nil { return }
}
outputSection.complete = true
return
}
// analyzeObjectMembers analyzes object members from a parser type section into
// a semantic type section.
func (analyzer *analysisOperation) analyzeObjectMembers (
into *TypeSection,
from parser.TypeSection,
) (
err error,
) {
inheritedSection := into.what.actual.(*TypeSection)
inheritsFromSameModule := analyzer.inCurrentModule(inheritedSection)
for index := 0; index < from.MembersLength(); index ++ {
inputMember := from.Member(index)
outputMember := ObjectMember { }
outputMember.location = inputMember.Location()
outputMember.name = inputMember.Name()
outputMember.permission = inputMember.Permission()
outputMember.bitWidth = inputMember.BitWidth()
inheritedMember, exists :=
inheritedSection.Member(inputMember.Name())
if exists {
// modifying default value/permissions of an
// inherited member
canAccessMember :=
inheritsFromSameModule ||
inheritedMember.permission !=
types.PermissionPrivate
if !canAccessMember {
err = inputMember.NewError (
"inherited member is private (pv) in " +
"parent type, and cannot be modified " +
"here",
infoerr.ErrorKindError)
return
}
outputMember.what = inheritedMember.what
if !inputMember.Type().Nil() {
err = inputMember.NewError (
"cannot override type of " +
"inherited member",
infoerr.ErrorKindError)
return
}
if outputMember.permission > inheritedMember.permission {
err = inputMember.NewError (
"cannot relax permission of " +
"inherited member",
infoerr.ErrorKindError)
return
}
canOverwriteMember :=
inheritsFromSameModule ||
inheritedMember.permission ==
types.PermissionReadWrite
// apply default value
if inputMember.Argument().Nil() {
// if it is unspecified, inherit it
outputMember.argument = inheritedMember.argument
} else {
if !canOverwriteMember {
err = inputMember.Argument().NewError (
"member is read-only (ro) in " +
"parent type, its default " +
"value cannot be overridden",
infoerr.ErrorKindError)
return
}
outputMember.argument,
err = analyzer.analyzeArgument(inputMember.Argument())
if err != nil { return }
// type check default value
err = analyzer.typeCheck (
outputMember.argument,
outputMember.what)
if err != nil { return }
}
} else {
// defining a new member
if inputMember.Type().Nil() {
err = inputMember.NewError (
"new members must be given a " +
"type",
infoerr.ErrorKindError)
return
}
outputMember.what, err = analyzer.analyzeType (
inputMember.Type())
if err != nil { return }
// apply default value
if !inputMember.Argument().Nil() {
outputMember.argument,
err = analyzer.analyzeArgument(inputMember.Argument())
if err != nil { return }
// type check default value
err = analyzer.typeCheck (
outputMember.argument,
outputMember.what)
if err != nil { return }
}
}
// ensure all member names are unique
for _, compareMember := range into.members {
if compareMember.name == outputMember.name {
err = inputMember.NewError (
"object member names must be unique",
infoerr.ErrorKindError)
return
}
}
into.members = append (
into.members,
outputMember)
}
return
}
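The permission rules above (an inherited member's permission may be tightened but never relaxed, and private members cannot be touched from another module) boil down to an ordering comparison. A minimal sketch of the relax check, assuming a numeric ordering of `pv < ro < rw`; the real `types.Permission` values may differ:

```go
package main

import "fmt"

// Assumed permission ordering, strictest to loosest.
const (
	PermissionPrivate   = iota // pv
	PermissionReadOnly         // ro
	PermissionReadWrite        // rw
)

// canOverride reports whether a child type may redeclare an inherited member
// with the given permission: tightening is allowed, relaxing is not, matching
// the "cannot relax permission of inherited member" error above.
func canOverride(child, inherited int) bool {
	return child <= inherited
}

func main() {
	fmt.Println(canOverride(PermissionPrivate, PermissionReadOnly))   // true
	fmt.Println(canOverride(PermissionReadWrite, PermissionReadOnly)) // false
}
```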


@@ -4,8 +4,46 @@ import "testing"
func TestTypeSection (test *testing.T) {
checkTree ("../tests/analyzer/typeSection", false,
`
typeSection ../tests/analyzer/typeSection.basicInt
type basic Int
`typeSection ro ../tests/analyzer/typeSection/required.aBasic
type 1 basic Int
typeSection ro ../tests/analyzer/typeSection/required.bBird
type 1 basic Obj
member rw wing
type 1 basic Int
uintLiteral 2
typeSection ro ../tests/analyzer/typeSection.aBasicInt
type 1 basic Int
uintLiteral 5
typeSection ro ../tests/analyzer/typeSection.bOnBasicInt
type 1 basic aBasicInt
typeSection ro ../tests/analyzer/typeSection.cBasicObject
type 1 basic Obj
member ro that
type 1 basic UInt
member ro this
type 1 basic Int
typeSection ro ../tests/analyzer/typeSection.dInheritFromOther
type 1 basic aBasic
typeSection ro ../tests/analyzer/typeSection.eInheritObject
type 1 basic cBasicObject
member ro that
type 1 basic UInt
uintLiteral 5
typeSection ro ../tests/analyzer/typeSection.fInheritObjectFromOther
type 1 basic bBird
member ro wing
type 1 basic Int
uintLiteral 2
member ro beak
type 1 basic Int
uintLiteral 238
typeSection ro ../tests/analyzer/typeSection.gPointer
type 1 pointer {
type 1 basic Int
}
typeSection ro ../tests/analyzer/typeSection.hDynamicArray
type 1 dynamicArray {
type 1 basic Int
}
`, test)
}


@@ -1,6 +1,6 @@
package analyzer
import "git.tebibyte.media/arf/arf/types"
import "fmt"
import "git.tebibyte.media/arf/arf/parser"
import "git.tebibyte.media/arf/arf/infoerr"
@@ -16,36 +16,13 @@ const (
// TypeKindVariableArray means it's an array of variable length.
TypeKindVariableArray
// TypeKindObject means it's a structured type with members.
TypeKindObject
)
// ObjectMember is a member of an object type.
type ObjectMember struct {
name string
// even if there is a private permission in another module, we still
// need to include it in the semantic analysis because we need to know
// how many members objects have.
permission types.Permission
what Type
}
func (member ObjectMember) ToString (indent int) (output string) {
output += doIndent (
indent,
member.name, " ",
member.permission.ToString(),
"\n")
output += member.what.ToString(indent + 1)
return
}
// Type represents a description of a type. It must eventually point to a
// TypeSection.
type Type struct {
locatable
// one of these must be nil.
actual Section
points *Type
@@ -53,18 +30,13 @@ type Type struct {
mutable bool
kind TypeKind
primitiveCache *TypeSection
singularCache *bool
// if this is greater than 1, it means that this is a fixed-length array
// of whatever the type is. even if the type is a variable length array.
// because literally why not.
length uint64
// this is only applicable for a TypeKindObject where new members are
// defined.
// TODO: do not add members from parent type. instead have a member
// function to discern whether this type contains a particular member,
// and have it recurse all the way up the family tree. it will be the
// translator's job to worry about what members are placed where.
members []ObjectMember
}
// ToString returns all data stored within the type, in string form.
@@ -81,9 +53,7 @@ func (what Type) ToString (indent int) (output string) {
case TypeKindPointer:
output += " pointer"
case TypeKindVariableArray:
output += " variableArray"
case TypeKindObject:
output += " object"
output += " dynamicArray"
}
if what.points != nil {
@@ -93,19 +63,220 @@ func (what Type) ToString (indent int) (output string) {
}
if what.actual != nil {
output += what.actual.Name()
output += " " + what.actual.Name()
}
output += "\n"
return
}
// underlyingPrimitive returns the primitive that this type eventually inherits
// from. If the type ends up pointing to something, this returns nil.
func (what Type) underlyingPrimitive () (underlying *TypeSection) {
// if we have already done this operation, return the cached result.
if what.primitiveCache != nil {
underlying = what.primitiveCache
return
}
if what.kind != TypeKindBasic {
// if we point to something, return nil because there is no void
// pointer bullshit in this language
return
}
actual := what.actual
switch actual {
case
&PrimitiveF32,
&PrimitiveF64,
&PrimitiveObj,
&PrimitiveU64,
&PrimitiveU32,
&PrimitiveU16,
&PrimitiveU8,
&PrimitiveI64,
&PrimitiveI32,
&PrimitiveI16,
&PrimitiveI8,
&PrimitiveUInt,
&PrimitiveInt:
underlying = actual.(*TypeSection)
return
case nil:
panic (
"invalid state: Type.actual is nil for " +
what.Describe() + " " +
what.locatable.location.Describe())
default:
// if none of the primitives matched, recurse.
switch actual.(type) {
case *TypeSection:
underlying =
actual.(*TypeSection).
what.underlyingPrimitive()
// case *FaceSection:
// TODO: depending on if this is an object interface or
// a function interface, return either Face or Func.
// we can assume this because of inheritance rules.
case *EnumSection:
underlying =
actual.(*EnumSection).
what.underlyingPrimitive()
default:
panic (
"invalid state: type " + what.Describe() +
" has illegal actual " +
what.locatable.location.Describe())
}
return
}
}
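underlyingPrimitive (and reduce below) resolve a type by recursing through each section's own type until a primitive is reached. A standalone sketch of that walk, using a hypothetical `node` type in place of the analyzer's sections:

```go
package main

import "fmt"

// node models a type that is either a primitive or inherits from another
// type; underlying walks up the inheritance chain the way
// underlyingPrimitive does above. The names here are illustrative only.
type node struct {
	name   string
	parent *node // nil means this node is itself a primitive
}

// underlying recurses until it reaches a node with no parent.
func underlying(n *node) string {
	if n.parent == nil {
		return n.name
	}
	return underlying(n.parent)
}

func main() {
	intPrim := &node{name: "Int"}
	weekday := &node{name: "Weekday", parent: intPrim}
	payday := &node{name: "Payday", parent: weekday}
	fmt.Println(underlying(payday)) // Int
}
```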
// isNumeric returns whether or not the type descends from a numeric primitive.
func (what Type) isNumeric () (numeric bool) {
primitive := what.underlyingPrimitive()
switch primitive {
case
&PrimitiveF64,
&PrimitiveF32,
&PrimitiveI64,
&PrimitiveI32,
&PrimitiveI16,
&PrimitiveI8,
&PrimitiveInt,
&PrimitiveU64,
&PrimitiveU32,
&PrimitiveU16,
&PrimitiveU8,
&PrimitiveUInt:
numeric = true
}
return
}
// isSignedNumeric returns whether or not the type descends from a signed
// numeric primitive.
func (what Type) isSignedNumeric () (signedNumeric bool) {
primitive := what.underlyingPrimitive()
switch primitive {
case
&PrimitiveF64,
&PrimitiveF32,
&PrimitiveI64,
&PrimitiveI32,
&PrimitiveI16,
&PrimitiveI8,
&PrimitiveInt:
signedNumeric = true
}
return
}
// isSingular returns whether or not the type is a singular value. this goes
// all the way up the inheritance chain, only stopping when it hits a non-basic
// type because this is about data storage of a value.
func (what Type) isSingular () (singular bool) {
// if we have already done this operation, return the cached result.
if what.singularCache != nil {
singular = *what.singularCache
return
}
// a fixed-length array of more than one element is never singular.
if what.length > 1 {
return
}
// decide whether or not to recurse
if what.kind != TypeKindBasic { return }
actual := what.actual
if actual == nil {
// we have hit the top of the inheritance chain, so the type is
// singular.
singular = true
return
} else {
switch actual.(type) {
case *TypeSection:
singular = actual.(*TypeSection).what.isSingular()
// TODO: uncomment this when this section has been implemented
// case *FaceSection:
// singular = true
case *EnumSection:
singular = actual.(*EnumSection).what.isSingular()
default:
panic (
"invalid state: type " + what.Describe() +
" has illegal actual " +
what.locatable.location.Describe())
}
}
return
}
// reduce ascends the inheritance chain and gets the first type it finds that
// isn't basic. If the type has a clear path of inheritence to a simple
// primitive, there will be no non-basic types in the chain and this method will
// return false for reducible. If the type this method is called on is not
// basic, it itself is returned.
func (what Type) reduce () (reduced Type, reducible bool) {
reducible = true
// returns itself if it is not basic (cannot be reduced further)
if what.kind != TypeKindBasic {
reduced = what
return
}
// if we can't recurse, return false for reducible
if what.actual == nil {
reducible = false
return
}
// otherwise, recurse
switch what.actual.(type) {
case *TypeSection:
reduced, reducible = what.actual.(*TypeSection).what.reduce()
// TODO: uncomment this when this section has been implemented
// case *FaceSection:
// singular = true
case *EnumSection:
reduced, reducible = what.actual.(*EnumSection).what.reduce()
default:
panic (
"invalid state: type " + what.Describe() +
"has illegal actual " +
what.locatable.location.Describe())
}
return
}
// analyzeType analyzes a type specifier.
func (analyzer AnalysisOperation) analyzeType (
func (analyzer analysisOperation) analyzeType (
inputType parser.Type,
) (
outputType Type,
err error,
) {
outputType.mutable = inputType.Mutable()
outputType.length = inputType.Length()
outputType.location = inputType.Location()
if outputType.length < 1 {
err = inputType.NewError (
"cannot specify a length of zero",
@@ -113,14 +284,109 @@ func (analyzer AnalysisOperation) analyzeType (
return
}
// analyze type this type points to, if it exists
switch inputType.Kind() {
case parser.TypeKindBasic:
outputType.kind = TypeKindBasic
case parser.TypeKindPointer:
outputType.kind = TypeKindPointer
case parser.TypeKindVariableArray:
outputType.kind = TypeKindVariableArray
}
if inputType.Kind() != parser.TypeKindBasic {
// analyze type this type points to, if it exists
var points Type
points, err = analyzer.analyzeType(inputType.Points())
outputType.points = &points
} else {
// analyze the type section this type uses
var node any
var bitten parser.Identifier
node,
bitten,
err = analyzer.fetchNodeFromIdentifier(inputType.Name())
if err != nil { return }
if bitten.Length() > 0 {
err = bitten.NewError(
"cannot use member selection in this context",
infoerr.ErrorKindError)
return
}
switch node.(type) {
// TODO: uncomment once these sections are implemented
case *TypeSection, *EnumSection /* , *FaceSection */:
outputType.actual = node.(Section)
default:
err = inputType.Name().NewError (
"this must refer to a type, interface, or enum",
infoerr.ErrorKindError)
return
}
}
// TODO
return
}
// Describe provides a human readable description of the type. The value of this
// should not be computationally analyzed.
func (what Type) Describe () (description string) {
if what.kind == TypeKindBasic {
actual := what.actual
switch actual {
case &PrimitiveF32:
description += "F32"
case &PrimitiveF64:
description += "F64"
case &PrimitiveObj:
description += "Obj"
case &PrimitiveU64:
description += "U64"
case &PrimitiveU32:
description += "U32"
case &PrimitiveU16:
description += "U16"
case &PrimitiveU8:
description += "U8"
case &PrimitiveI64:
description += "I64"
case &PrimitiveI32:
description += "I32"
case &PrimitiveI16:
description += "I16"
case &PrimitiveI8:
description += "I8"
case &PrimitiveUInt:
description += "UInt"
case &PrimitiveInt:
description += "Int"
// case &PrimitiveFunc:
// description += "Func"
// case &PrimitiveFace:
// description += "Face"
case &BuiltInString:
description += "String"
case nil:
description += "NIL-TYPE-ACTUAL"
default:
description += actual.ModuleName() + "." + actual.Name()
return
}
} else {
description += "{"
description += what.points.Describe()
description += "}"
}
if what.length != 1 {
description += fmt.Sprint(":", what.length)
}
return
}


@@ -43,7 +43,11 @@ func (location *Location) SetWidth (width int) {
func (location Location) Describe () (description string) {
return fmt.Sprint (
"in ", location.file.Path(),
" row ", location.row,
" column ", location.column,
" row ", location.row + 1,
" column ", location.column + 1,
" width ", location.width)
}
// TODO: add extend method that extends that takes in another location, and
// returns a new location that spans the two. then, use it in the parser to
// properly locate an entire tree node.


@@ -25,6 +25,10 @@ func NewError (
) (
err Error,
) {
if location.File() == nil {
panic("cannot create new Error in a blank file")
}
return Error {
Location: location,
message: message,


@@ -1,3 +1,8 @@
/*
Package lexer implements a tokenizer for the ARF language. It contains a
function called Tokenize which takes in a file from the ARF file package, and
outputs an array of tokens.
*/
package lexer
import "io"
@@ -5,8 +10,8 @@ import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/infoerr"
// LexingOperation holds information about an ongoing lexing operation.
type LexingOperation struct {
// lexingOperation holds information about an ongoing lexing operation.
type lexingOperation struct {
file *file.File
char rune
tokens []Token
@@ -14,7 +19,7 @@ type LexingOperation struct {
// Tokenize converts a file into a slice of tokens (lexemes).
func Tokenize (file *file.File) (tokens []Token, err error) {
lexer := LexingOperation { file: file }
lexer := lexingOperation { file: file }
err = lexer.tokenize()
tokens = lexer.tokens
@@ -28,7 +33,7 @@ func Tokenize (file *file.File) (tokens []Token, err error) {
// tokenize converts a file into a slice of tokens (lexemes). It will always
// return a non-nil error, but if nothing went wrong it will return io.EOF.
func (lexer *LexingOperation) tokenize () (err error) {
func (lexer *lexingOperation) tokenize () (err error) {
// check to see if the beginning of the file says :arf
var shebangCheck = []rune(":arf\n")
for index := 0; index < 5; index ++ {
@@ -66,6 +71,8 @@ func (lexer *LexingOperation) tokenize () (err error) {
if err != nil { return }
}
// TODO: figure out why this is here and what its proper place is
// because it is apparently unreachable
if lexer.tokens[len(lexer.tokens) - 1].kind != TokenKindNewline {
token := lexer.newToken()
token.kind = TokenKindNewline
@@ -75,7 +82,7 @@ func (lexer *LexingOperation) tokenize () (err error) {
return
}
func (lexer *LexingOperation) tokenizeAlphaBeginning () (err error) {
func (lexer *lexingOperation) tokenizeAlphaBeginning () (err error) {
token := lexer.newToken()
token.kind = TokenKindName
@@ -109,7 +116,7 @@ func (lexer *LexingOperation) tokenizeAlphaBeginning () (err error) {
return
}
func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
func (lexer *lexingOperation) tokenizeSymbolBeginning () (err error) {
switch lexer.char {
case '#':
// comment
@@ -167,10 +174,8 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
token.kind = TokenKindNewline
lexer.addToken(token)
err = lexer.nextRune()
case '"':
err = lexer.tokenizeString(false)
case '\'':
err = lexer.tokenizeString(true)
err = lexer.tokenizeString()
case ':':
token := lexer.newToken()
token.kind = TokenKindColon
@@ -387,7 +392,7 @@ func (lexer *LexingOperation) tokenizeSymbolBeginning () (err error) {
return
}
func (lexer *LexingOperation) tokenizeDashBeginning () (err error) {
func (lexer *lexingOperation) tokenizeDashBeginning () (err error) {
token := lexer.newToken()
err = lexer.nextRune()
if err != nil { return }
@@ -424,17 +429,17 @@ func (lexer *LexingOperation) tokenizeDashBeginning () (err error) {
}
// newToken creates a new token from the lexer's current position in the file.
func (lexer *LexingOperation) newToken () (token Token) {
func (lexer *lexingOperation) newToken () (token Token) {
return Token { location: lexer.file.Location(1) }
}
// addToken adds a new token to the lexer's token slice.
func (lexer *LexingOperation) addToken (token Token) {
func (lexer *lexingOperation) addToken (token Token) {
lexer.tokens = append(lexer.tokens, token)
}
// skipSpaces skips all space characters (not tabs or newlines)
func (lexer *LexingOperation) skipSpaces () (err error) {
func (lexer *lexingOperation) skipSpaces () (err error) {
for lexer.char == ' ' {
err = lexer.nextRune()
if err != nil { return }
@@ -444,7 +449,7 @@ func (lexer *LexingOperation) skipSpaces () (err error) {
}
// nextRune advances the lexer to the next rune in the file.
func (lexer *LexingOperation) nextRune () (err error) {
func (lexer *lexingOperation) nextRune () (err error) {
lexer.char, _, err = lexer.file.ReadRune()
if err != nil && err != io.EOF {
return infoerr.NewError (


@@ -73,7 +73,7 @@ func compareErr (
correctWidth int,
test *testing.T,
) {
test.Log("testing errors in", filePath)
test.Log("testing error in", filePath)
file, err := file.Open(filePath)
if err != nil {
test.Log(err)
@@ -81,12 +81,28 @@ func compareErr (
return
}
_, err = Tokenize(file)
check := err.(infoerr.Error)
tokens, err := Tokenize(file)
check, isCorrectType := err.(infoerr.Error)
for index, token := range tokens {
test.Log(index, "\tgot token:", token.Describe())
}
if err == nil {
test.Log("no error was recieved, test failed.")
test.Fail()
return
}
test.Log("error that was recieved:")
test.Log(check)
if !isCorrectType {
test.Log("error is not infoerr.Error, something has gone wrong.")
test.Fail()
return
}
if check.Kind() != correctKind {
test.Log("mismatched error kind")
test.Log("- want:", correctKind)
@@ -132,7 +148,7 @@ func TestTokenizeAll (test *testing.T) {
quickToken(9, TokenKindUInt, uint64(932748397)),
quickToken(12, TokenKindFloat, 239485.37520),
quickToken(16, TokenKindString, "hello world!\n"),
quickToken(3, TokenKindRune, 'E'),
quickToken(3, TokenKindString, "E"),
quickToken(10, TokenKindName, "helloWorld"),
quickToken(1, TokenKindColon, nil),
quickToken(1, TokenKindDot, nil),
@@ -215,18 +231,17 @@ func TestTokenizeNumbers (test *testing.T) {
func TestTokenizeText (test *testing.T) {
checkTokenSlice("../tests/lexer/text.arf", test,
quickToken(34, TokenKindString, "hello world!\a\b\f\n\r\t\v'\"\\"),
quickToken(32, TokenKindString, "hello world!\a\b\f\n\r\t\v'\\"),
quickToken(1, TokenKindNewline, nil),
quickToken(4, TokenKindRune, '\a'),
quickToken(4, TokenKindRune, '\b'),
quickToken(4, TokenKindRune, '\f'),
quickToken(4, TokenKindRune, '\n'),
quickToken(4, TokenKindRune, '\r'),
quickToken(4, TokenKindRune, '\t'),
quickToken(4, TokenKindRune, '\v'),
quickToken(4, TokenKindRune, '\''),
quickToken(4, TokenKindRune, '"' ),
quickToken(4, TokenKindRune, '\\'),
quickToken(4, TokenKindString, "\a"),
quickToken(4, TokenKindString, "\b"),
quickToken(4, TokenKindString, "\f"),
quickToken(4, TokenKindString, "\n"),
quickToken(4, TokenKindString, "\r"),
quickToken(4, TokenKindString, "\t"),
quickToken(4, TokenKindString, "\v"),
quickToken(4, TokenKindString, "'"),
quickToken(4, TokenKindString, "\\"),
quickToken(1, TokenKindNewline, nil),
quickToken(35, TokenKindString, "hello world \x40\u0040\U00000040!"),
quickToken(1, TokenKindNewline, nil),
@@ -251,21 +266,16 @@ func TestTokenizeIndent (test *testing.T) {
)
}
func TestTokenizeErr (test *testing.T) {
func TestTokenizeErrUnexpectedSymbol (test *testing.T) {
compareErr (
"../tests/lexer/error/unexpectedSymbol.arf",
infoerr.ErrorKindError,
"unexpected symbol character ;",
1, 5, 1,
test)
compareErr (
"../tests/lexer/error/excessDataRune.arf",
infoerr.ErrorKindError,
"excess data in rune literal",
1, 1, 7,
test)
}
func TestTokenizeErrUnknownEscape (test *testing.T) {
compareErr (
"../tests/lexer/error/unknownEscape.arf",
infoerr.ErrorKindError,


@@ -4,7 +4,7 @@ import "strconv"
import "git.tebibyte.media/arf/arf/infoerr"
// tokenizeNumberBeginning lexes a token that starts with a number.
func (lexer *LexingOperation) tokenizeNumberBeginning (negative bool) (err error) {
func (lexer *lexingOperation) tokenizeNumberBeginning (negative bool) (err error) {
var intNumber uint64
var floatNumber float64
var isFloat bool
@@ -107,7 +107,7 @@ func runeIsDigit (char rune, radix uint64) (isDigit bool) {
}
// tokenizeNumber reads and tokenizes a number with the specified radix.
func (lexer *LexingOperation) tokenizeNumber (
func (lexer *lexingOperation) tokenizeNumber (
radix uint64,
) (
intNumber uint64,


@@ -4,15 +4,15 @@ import "strconv"
import "git.tebibyte.media/arf/arf/infoerr"
// tokenizeString tokenizes a string or rune literal.
func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
func (lexer *lexingOperation) tokenizeString () (err error) {
token := lexer.newToken()
err = lexer.nextRune()
if err != nil { return }
token := lexer.newToken()
got := ""
got := ""
tokenWidth := 2
beginning := lexer.file.Location(1)
for {
if lexer.char == '\\' {
err = lexer.nextRune()
@@ -34,32 +34,14 @@ func (lexer *LexingOperation) tokenizeString (isRuneLiteral bool) (err error) {
if err != nil { return }
}
if isRuneLiteral {
if lexer.char == '\'' { break }
} else {
if lexer.char == '"' { break }
}
if lexer.char == '\'' { break }
}
err = lexer.nextRune()
if err != nil { return }
beginning.SetWidth(len(got))
if isRuneLiteral {
if len(got) > 1 {
err = infoerr.NewError (
beginning,
"excess data in rune literal",
infoerr.ErrorKindError)
return
}
token.kind = TokenKindRune
token.value = rune([]rune(got)[0])
} else {
token.kind = TokenKindString
token.value = got
}
token.kind = TokenKindString
token.value = got
token.location.SetWidth(tokenWidth)
lexer.addToken(token)
@@ -77,12 +59,11 @@ var escapeSequenceMap = map[rune] rune {
't': '\x09',
'v': '\x0b',
'\'': '\'',
'"': '"',
'\\': '\\',
}
// getEscapeSequence reads an escape sequence in a string or rune literal.
func (lexer *LexingOperation) getEscapeSequence () (
func (lexer *lexingOperation) getEscapeSequence () (
result rune,
amountRead int,
err error,


@@ -19,7 +19,6 @@ const (
TokenKindUInt
TokenKindFloat
TokenKindString
TokenKindRune
TokenKindName
@@ -156,8 +155,6 @@ func (tokenKind TokenKind) Describe () (description string) {
description = "Float"
case TokenKindString:
description = "String"
case TokenKindRune:
description = "Rune"
case TokenKindName:
description = "Name"
case TokenKindColon:


@@ -3,8 +3,17 @@ package parser
import "git.tebibyte.media/arf/arf/types"
// LookupSection looks up and returns the section under the given name. If the section
// does not exist, nil is returned.
func (tree SyntaxTree) LookupSection (name string) (section Section) {
// does not exist, nil is returned. If a method is being searched for, the type
// name of its receiver should be passed. If not, it should just be left blank.
func (tree SyntaxTree) LookupSection (
receiver string,
name string,
) (
section Section,
) {
if receiver != "" {
name = receiver + "_" + name
}
section = tree.sections[name]
return
}
@@ -35,10 +44,17 @@ func (identifier Identifier) Item (index int) (item string) {
return
}
// Bite removes the first item from the identifier and returns it.
func (identifier *Identifier) Bite () (item string) {
item = identifier.trail[0]
identifier.trail = identifier.trail[1:]
// Bite returns the first item of an identifier, and a copy of that identifier
// with that item removed. If there is nothing left to bite off, this method
// panics.
func (identifier Identifier) Bite () (item string, bitten Identifier) {
if len(identifier.trail) < 1 {
panic ("trying to bite an empty identifier")
}
bitten = identifier
item = bitten.trail[0]
bitten.trail = bitten.trail[1:]
return
}
@@ -48,6 +64,12 @@ func (what Type) Kind () (kind TypeKind) {
return
}
// Nil returns true if the type is nil, and false if it isn't.
func (what Type) Nil () (isNil bool) {
isNil = what.kind == TypeKindNil
return
}
// Mutable returns whether or not the type's data is mutable.
func (what Type) Mutable () (mutable bool) {
mutable = what.mutable
@@ -82,22 +104,22 @@ func (what Type) Points () (points Type) {
return
}
// MembersLength returns the amount of new members the type specifier defines.
// MembersLength returns the amount of new members the type section defines.
// If it defines no new members, it returns zero.
func (what Type) MembersLength () (length int) {
length = len(what.members)
func (section TypeSection) MembersLength () (length int) {
length = len(section.members)
return
}
// Member returns the member at index.
func (what Type) Member (index int) (member TypeMember) {
member = what.members[index]
func (section TypeSection) Member (index int) (member TypeSectionMember) {
member = section.members[index]
return
}
// BitWidth returns the bit width of the type member. If it is zero, it should
// be treated as unspecified.
func (member TypeMember) BitWidth () (width uint64) {
func (member TypeSectionMember) BitWidth () (width uint64) {
width = member.bitWidth
return
}
@@ -108,6 +130,12 @@ func (argument Argument) Kind () (kind ArgumentKind) {
return
}
// Nil returns true if the argument is nil, and false if it isn't.
func (argument Argument) Nil () (isNil bool) {
isNil = argument.kind == ArgumentKindNil
return
}
// Value returns the underlying value of the argument. You can use Kind() to
// find out what to cast this to.
func (argument Argument) Value () (value any) {
@@ -169,18 +197,6 @@ func (phrase Phrase) Kind () (kind PhraseKind) {
return
}
// ArgumentsLength returns the amount of arguments in the phrase.
func (phrase Phrase) ArgumentsLength () (length int) {
length = len(phrase.arguments)
return
}
// Argument returns the argument at index.
func (phrase Phrase) Argument (index int) (argument Argument) {
argument = phrase.arguments[index]
return
}
// ReturneesLength returns the amount of things the phrase returns to.
func (phrase Phrase) ReturneesLength () (length int) {
length = len(phrase.returnees)


@@ -10,12 +10,13 @@ var validArgumentStartTokens = []lexer.TokenKind {
lexer.TokenKindUInt,
lexer.TokenKindFloat,
lexer.TokenKindString,
lexer.TokenKindRune,
lexer.TokenKindLBracket,
lexer.TokenKindLBrace,
lexer.TokenKindLParen,
}
func (parser *ParsingOperation) parseArgument () (argument Argument, err error) {
func (parser *parsingOperation) parseArgument () (argument Argument, err error) {
argument.location = parser.token.Location()
err = parser.expect(validArgumentStartTokens...)
@@ -75,15 +76,18 @@ func (parser *ParsingOperation) parseArgument () (argument Argument, err error)
argument.value = parser.token.Value().(string)
parser.nextToken()
case lexer.TokenKindRune:
argument.kind = ArgumentKindRune
argument.value = parser.token.Value().(rune)
parser.nextToken()
case lexer.TokenKindLBracket:
argument.kind = ArgumentKindPhrase
argument.kind = ArgumentKindPhrase
argument.value, err = parser.parseArgumentLevelPhrase()
case lexer.TokenKindLBrace:
argument.kind = ArgumentKindDereference
argument.value, err = parser.parseDereference()
case lexer.TokenKindLParen:
argument.kind = ArgumentKindList
argument.value, err = parser.parseList()
default:
panic (
"unimplemented argument kind " +


@@ -4,7 +4,7 @@ import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// parse body parses the body of an arf file, after the metadata header.
func (parser *ParsingOperation) parseBody () (err error) {
func (parser *parsingOperation) parseBody () (err error) {
for {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
@@ -53,7 +53,17 @@ func (parser *ParsingOperation) parseBody () (err error) {
// addSection adds a section to the tree, ensuring it has a unique name within
// the module.
func (tree *SyntaxTree) addSection (section Section) (err error) {
_, exists := tree.sections[section.Name()]
index := section.Name()
funcSection, isFuncSection := section.(FuncSection)
if isFuncSection {
receiver := funcSection.receiver
if receiver != nil {
index = receiver.what.points.name.trail[0] + "_" + index
}
}
_, exists := tree.sections[index]
if exists {
err = section.NewError (
"cannot have multiple sections with the same name",
@@ -61,6 +71,6 @@ func (tree *SyntaxTree) addSection (section Section) (err error) {
return
}
tree.sections[section.Name()] = section
tree.sections[index] = section
return
}


@@ -4,7 +4,7 @@ import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/lexer"
// parseData parses a data section.
func (parser *ParsingOperation) parseDataSection () (
func (parser *parsingOperation) parseDataSection () (
section DataSection,
err error,
) {
@@ -35,28 +35,37 @@ func (parser *ParsingOperation) parseDataSection () (
return
}
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
// check if data is external
if parser.token.Is(lexer.TokenKindIndent) &&
parser.token.Value().(int) == 1 {
// see if value exists
if parser.token.Is(lexer.TokenKindNewline) {
parser.nextToken()
// if we have exited the section, return
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken(lexer.TokenKindName)
err = parser.nextToken()
if err != nil { return }
}
// check if external
if parser.token.Is(lexer.TokenKindName) {
if parser.token.Value().(string) == "external" {
section.external = true
err = parser.nextToken(lexer.TokenKindNewline)
if err != nil { return }
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}
parser.previousToken()
}
// get value
section.argument, err = parser.parseArgument()
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}


@@ -6,14 +6,16 @@ func TestData (test *testing.T) {
checkTree ("../tests/parser/data", false,
`:arf
---
data ro aInteger:Int:<3202>
data ro bMutInteger:Int:mut:<3202>
data ro aInteger:Int
3202
data ro bMutInteger:Int:mut
3202
data ro cIntegerPointer:{Int}
data ro dMutIntegerPointer:{Int}:mut
data ro eIntegerArray16:Int:16
data ro fIntegerArrayVariable:{Int ..}
data ro gIntegerArrayInitialized:Int:16:
<
data ro gIntegerArrayInitialized:Int:16
(
3948
293
293049
@@ -25,35 +27,21 @@ data ro gIntegerArrayInitialized:Int:16:
0
4785
92
>
data rw hIntegerPointerInit:{Int}:<[& integer]>
data rw iMutIntegerPointerInit:{Int}:mut:<[& integer]>
data ro jObject:Obj:
(
.that:<324>
.this:<324>
)
data ro kNestedObject:Obj:
data rw hIntegerPointerInit:{Int}
[& integer]
data rw iMutIntegerPointerInit:{Int}:mut
[& integer]
data ro jObject:Obj
(
.ro newMember:Int:<9023>
):
(
.that:
(
.bird2:<123.8439>
.bird3:<9328.21348239>
)
.this:
(
.bird0:<324>
.bird1:<"hello world">
)
324
438
)
data ro lMutIntegerArray16:Int:16:mut
data ro mExternalData:Int:8
external
data ro nIntegerArrayInitialized:Int:16:mut:
<
data ro nIntegerArrayInitialized:Int:16:mut
(
3948
293
293049
@@ -65,6 +53,6 @@ data ro nIntegerArrayInitialized:Int:16:mut:
0
4785
92
>
)
`, test)
}


@@ -1,180 +0,0 @@
package parser
//
// import "git.tebibyte.media/arf/arf/lexer"
// import "git.tebibyte.media/arf/arf/infoerr"
//
// // TODO:
// // (parser *ParsingOperation) parseDefaultValues
//
// // (parser *ParsingOperation) parseDefaultMemberValues (return tree of new members and a tree of member values)
// // (parser *ParsingOperation) parseDefaultArrayValues
//
// // (parser *ParsingOperation) parseDefaultMemberValue
// // (parser *ParsingOperation) parseMemberDeclaration
//
// // parseDefaultValues starts on the line after a = phrase, a data section, a
// // type section, or anywhere else where there can be a default value. It returns
// // a default value in the form of an argument, as well as any defined members
// // that it finds.
// func (parser *ParsingOperation) parseDefaultValues (
// baseIndent int,
// ) (
// argument Argument,
// members []TypeMember,
// err error,
// ) {
// // check if line is indented one more than baseIndent
// if !parser.token.Is(lexer.TokenKindIndent) { return }
// if parser.token.Value().(int) != baseIndent + 1 { return }
//
// argument.location = parser.token.Location()
//
// err = parser.nextToken()
// if err != nil { return }
//
// if parser.token.Is(lexer.TokenKindDot) {
//
// // object initialization
// parser.previousToken()
// var values ObjectDefaultValues
// values, err = parser.parseObjectdefaultValues()
// argument.kind = ArgumentKindObjectDefaultValues
// argument.value = values
//
// } else {
//
// // array initialization
// parser.previousToken()
// var values ArrayDefaultValues
// values, err = parser.parseArrayDefaultValues()
// argument.kind = ArgumentKindArrayDefaultValues
// argument.value = values
// }
//
// return
// }
//
// // parseObjectdefaultValues parses a list of object initialization
// // values until the indentation level drops.
// func (parser *ParsingOperation) parseObjectdefaultValues () (
// defaultValues ObjectDefaultValues,
// err error,
// ) {
// defaultValues.attributes = make(map[string] Argument)
//
// baseIndent := 0
// begin := true
//
// for {
// // if there is no indent we can just stop parsing
// if !parser.token.Is(lexer.TokenKindIndent) { break}
// indent := parser.token.Value().(int)
//
// if begin == true {
// defaultValues.location = parser.token.Location()
// baseIndent = indent
// begin = false
// }
//
// // do not parse any further if the indent has changed
// if indent != baseIndent { break }
//
// // move on to the beginning of the line, which must contain
// // a member initialization value
// err = parser.nextToken(lexer.TokenKindDot)
// if err != nil { return }
// err = parser.nextToken(lexer.TokenKindName)
// if err != nil { return }
// name := parser.token.Value().(string)
//
// // if the member has already been listed, throw an error
// _, exists := defaultValues.attributes[name]
// if exists {
// err = parser.token.NewError (
// "duplicate member \"" + name + "\" in object " +
// "member initialization",
// infoerr.ErrorKindError)
// return
// }
//
// // parse the argument determining the member initialization
// // value
// err = parser.nextToken()
// if err != nil { return }
// var value Argument
// if parser.token.Is(lexer.TokenKindNewline) {
//
// // recurse
// err = parser.nextToken(lexer.TokenKindIndent)
// if err != nil { return }
//
// value, err = parser.parseDefaultValues(baseIndent)
// defaultValues.attributes[name] = value
// if err != nil { return }
//
// } else {
//
// // parse as normal argument
// value, err = parser.parseArgument()
// defaultValues.attributes[name] = value
// if err != nil { return }
//
// err = parser.expect(lexer.TokenKindNewline)
// if err != nil { return }
// err = parser.nextToken()
// if err != nil { return }
// }
// }
//
// return
// }
//
// // parseArrayDefaultValues parses a list of array initialization values until
// // the indentation level drops.
// func (parser *ParsingOperation) parseArrayDefaultValues () (
// defaultValues ArrayDefaultValues,
// err error,
// ) {
// baseIndent := 0
// begin := true
//
// for {
// // if there is no indent we can just stop parsing
// if !parser.token.Is(lexer.TokenKindIndent) { break}
// indent := parser.token.Value().(int)
//
// if begin == true {
// defaultValues.location = parser.token.Location()
// baseIndent = indent
// begin = false
// }
//
// // do not parse any further if the indent has changed
// if indent != baseIndent { break }
//
// // move on to the beginning of the line, which must contain
// // arguments
// err = parser.nextToken(validArgumentStartTokens...)
// if err != nil { return }
//
// for {
// // stop parsing this line and go on to the next if a
// // newline token is encountered
// if parser.token.Is(lexer.TokenKindNewline) {
// err = parser.nextToken()
// if err != nil { return }
// break
// }
//
// // otherwise, parse the argument
// var argument Argument
// argument, err = parser.parseArgument()
// if err != nil { return }
// defaultValues.values = append (
// defaultValues.values,
// argument)
// }
// }
//
// return
// }

parser/dereference.go Normal file

@@ -0,0 +1,31 @@
package parser
import "git.tebibyte.media/arf/arf/lexer"
func (parser *parsingOperation) parseDereference () (
dereference Dereference,
err error,
) {
err = parser.expect(lexer.TokenKindLBrace)
if err != nil { return }
dereference.location = parser.token.Location()
// parse the value we are dereferencing
err = parser.nextToken(validArgumentStartTokens...)
if err != nil { return }
dereference.argument, err = parser.parseArgument()
if err != nil { return }
// if there is an offset, parse it
err = parser.expect(lexer.TokenKindUInt, lexer.TokenKindRBrace)
if err != nil { return }
if parser.token.Is(lexer.TokenKindUInt) {
dereference.offset = parser.token.Value().(uint64)
}
err = parser.nextToken(lexer.TokenKindRBrace)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}


@@ -5,7 +5,7 @@ import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// parseEnumSection parses an enumerated type section.
func (parser *ParsingOperation) parseEnumSection () (
func (parser *parsingOperation) parseEnumSection () (
section EnumSection,
err error,
) {
@@ -51,7 +51,7 @@ func (parser *ParsingOperation) parseEnumSection () (
// parseEnumMembers parses a list of members for an enum section. Indentation
// level is assumed.
func (parser *ParsingOperation) parseEnumMembers (
func (parser *parsingOperation) parseEnumMembers (
into *EnumSection,
) (
err error,
@@ -61,27 +61,20 @@ func (parser *ParsingOperation) parseEnumMembers (
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken(lexer.TokenKindMinus)
if err != nil { return }
var member EnumMember
member, err = parser.parseEnumMember()
into.members = append(into.members, member)
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
}
}
// parseEnumMember parses a single enum member. Indentation level is assumed.
func (parser *ParsingOperation) parseEnumMember () (
func (parser *parsingOperation) parseEnumMember () (
member EnumMember,
err error,
) {
err = parser.expect(lexer.TokenKindMinus)
err = parser.nextToken(lexer.TokenKindMinus)
if err != nil { return }
// get name
@@ -91,32 +84,26 @@ func (parser *ParsingOperation) parseEnumMember () (
member.name = parser.token.Value().(string)
// see if value exists
err = parser.nextToken (
lexer.TokenKindColon,
lexer.TokenKindNewline)
err = parser.nextToken()
if err != nil { return }
if parser.token.Is(lexer.TokenKindColon) {
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
err = parser.skipWhitespace()
// if we have exited the member, return
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 2 { return }
err = parser.nextToken()
if err != nil { return }
err = parser.expect (
lexer.TokenKindLessThan,
lexer.TokenKindLParen)
if err != nil { return }
if parser.token.Is(lexer.TokenKindLessThan) {
// parse value
member.value, err = parser.parseBasicDefaultValue()
if err != nil { return }
} else if parser.token.Is(lexer.TokenKindLParen) {
// parse default values
member.value, err = parser.parseObjectDefaultValue()
if err != nil { return }
}
}
// get value
member.argument, err = parser.parseArgument()
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}


@@ -7,56 +7,18 @@ func TestEnum (test *testing.T) {
`:arf
---
enum ro AffrontToGod:Int:4
- bird0:
<
28394
9328
398
9
>
- bird1:
<
23
932832
398
2349
>
- bird2:
<
1
2
3
4
>
- bird0 (28394 9328 398 9)
- bird1 (23 932832 398 2349)
- bird2 (1 2 3 4)
enum ro NamedColor:U32
- red:<16711680>
- green:<65280>
- blue:<255>
enum ro ThisIsTerrible:Obj:
(
.rw x:Int
.rw y:Int
)
- up:
(
.x:<0>
.y:<-1>
)
- down:
(
.x:<0>
.y:<1>
)
- left:
(
.x:<-1>
.y:<0>
)
- right:
(
.x:<1>
.y:<0>
)
- red 16711680
- green 65280
- blue 255
enum ro ThisIsTerrible:Vector
- up (0 -1)
- down (0 1)
- left (-1 0)
- right (1 0)
enum ro Weekday:Int
- sunday
- monday


@@ -5,15 +5,14 @@ import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// parseFaceSection parses an interface section.
func (parser *ParsingOperation) parseFaceSection () (
func (parser *parsingOperation) parseFaceSection () (
section FaceSection,
err error,
) {
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
section.behaviors = make(map[string] FaceBehavior)
section.location = parser.token.Location()
section.location = parser.token.Location()
// get permission
err = parser.nextToken(lexer.TokenKindPermission)
@@ -32,24 +31,61 @@ func (parser *ParsingOperation) parseFaceSection () (
if err != nil { return }
section.inherits, err = parser.parseIdentifier()
if err != nil { return }
err = parser.nextToken(lexer.TokenKindNewline)
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken (
lexer.TokenKindName,
lexer.TokenKindGreaterThan,
lexer.TokenKindLessThan)
if err != nil { return }
if parser.token.Is(lexer.TokenKindName) {
// parse type interface
section.kind = FaceKindType
parser.previousToken()
section.behaviors, err = parser.parseFaceBehaviors()
if err != nil { return }
} else {
// parse function interface
section.kind = FaceKindFunc
parser.previousToken()
section.inputs,
section.outputs, err = parser.parseFaceBehaviorArguments(1)
if err != nil { return }
}
return
}
// parseFaceBehaviors parses a list of interface behaviors for an object
// interface.
func (parser *parsingOperation) parseFaceBehaviors () (
behaviors map[string] FaceBehavior,
err error,
) {
// parse members
behaviors = make(map[string] FaceBehavior)
for {
// if we've left the block, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken(lexer.TokenKindName)
behaviorBeginning := parser.token.Location()
if err != nil { return }
// parse behavior
behaviorBeginning := parser.token.Location()
var behavior FaceBehavior
behavior, err = parser.parseFaceBehavior()
behavior, err = parser.parseFaceBehavior(1)
// add to section
_, exists := section.behaviors[behavior.name]
_, exists := behaviors[behavior.name]
if exists {
err = infoerr.NewError (
behaviorBeginning,
@@ -58,23 +94,21 @@ func (parser *ParsingOperation) parseFaceSection () (
infoerr.ErrorKindError)
return
}
section.behaviors[behavior.name] = behavior
behaviors[behavior.name] = behavior
if err != nil { return }
}
}
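The duplicate check above uses Go's comma-ok map probe before inserting a behavior. A short stand-alone sketch of that uniqueness guard, with simplified stand-in types:

```go
package main

import "fmt"

// addBehavior inserts a named entry only if the name is not already taken,
// the same comma-ok map probe parseFaceBehaviors uses to reject duplicate
// behavior names. String values stand in for FaceBehavior here.
func addBehavior(behaviors map[string]string, name, body string) error {
	if _, exists := behaviors[name]; exists {
		return fmt.Errorf("behavior %q defined twice", name)
	}
	behaviors[name] = body
	return nil
}

func main() {
	behaviors := make(map[string]string)
	fmt.Println(addBehavior(behaviors, "read", "> into:{Byte ..}"))  // <nil>
	fmt.Println(addBehavior(behaviors, "read", "> again:{Byte ..}")) // duplicate error
}
```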
// parseFaceBehavior parses a single interface behavior. Indentation level is
// assumed.
func (parser *ParsingOperation) parseFaceBehavior () (
// parseFaceBehavior parses a single interface behavior.
func (parser *parsingOperation) parseFaceBehavior (
indent int,
) (
behavior FaceBehavior,
err error,
) {
err = parser.expect(lexer.TokenKindIndent)
if err != nil { return }
// get name
err = parser.nextToken(lexer.TokenKindName)
err = parser.expect(lexer.TokenKindName)
if err != nil { return }
behavior.name = parser.token.Value().(string)
@@ -82,11 +116,27 @@ func (parser *ParsingOperation) parseFaceBehavior () (
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
behavior.inputs,
behavior.outputs,
err = parser.parseFaceBehaviorArguments(indent + 1)
if err != nil { return }
return
}
func (parser *parsingOperation) parseFaceBehaviorArguments (
indent int,
) (
inputs []Declaration,
outputs []Declaration,
err error,
) {
for {
// if we've left the block, stop parsing
// if we've left the behavior, stop parsing
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 2 { return }
if parser.token.Value().(int) != indent { return }
// get preceding symbol
err = parser.nextToken (
@@ -109,19 +159,16 @@ func (parser *ParsingOperation) parseFaceBehavior () (
if err != nil { return }
declaration.what, err = parser.parseType()
if err != nil { return }
if kind == lexer.TokenKindGreaterThan {
inputs = append(inputs, declaration)
} else {
outputs = append(outputs, declaration)
}
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
if kind == lexer.TokenKindGreaterThan {
behavior.inputs = append (
behavior.inputs,
declaration)
} else {
behavior.outputs = append (
behavior.outputs,
declaration)
}
}
}

parser/face_test.go Normal file

@@ -0,0 +1,25 @@
package parser
import "testing"
func TestFace (test *testing.T) {
checkTree ("../tests/parser/face", false,
`:arf
---
face ro aReadWriter:Face
read
> into:{Byte ..}
< read:Int
< err:Error
write
> data:{Byte ..}
< wrote:Int
< err:Error
face ro bDestroyer:Face
destroy
face ro cFuncInterface:Func
> something:Int
< someOutput:Int
< otherOutput:String
`, test)
}
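Tests like TestFace follow a golden-output pattern: parse a module, render the resulting tree back to text, and compare against a literal expectation. A minimal sketch of that shape, with `render` stubbed in place of the real Fetch-then-ToString pipeline:

```go
package main

import "fmt"

// render is a stub standing in for parser.Fetch followed by
// SyntaxTree.ToString; the real checkTree helper parses a module from disk.
func render(source string) string {
	// trivially, every input renders to the same header here
	return ":arf\n---\n"
}

// checkTree compares rendered output against a golden string, the pattern
// the parser tests above use.
func checkTree(source, want string) error {
	if got := render(source); got != want {
		return fmt.Errorf("tree mismatch:\ngot:\n%s\nwant:\n%s", got, want)
	}
	return nil
}

func main() {
	fmt.Println(checkTree("face ro x:Face", ":arf\n---\n") == nil) // true
}
```

Golden comparisons like this are why every ToString change in the diff is paired with updated test literals.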


@@ -5,7 +5,7 @@ import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// parseFunc parses a function section.
func (parser *ParsingOperation) parseFuncSection () (
func (parser *parsingOperation) parseFuncSection () (
section FuncSection,
err error,
) {
@@ -75,7 +75,7 @@ func (parser *ParsingOperation) parseFuncSection () (
// parseFuncArguments parses a function's inputs, outputs, and receiver if one
// exists.
func (parser *ParsingOperation) parseFuncArguments (
func (parser *parsingOperation) parseFuncArguments (
into *FuncSection,
) (
err error,
@@ -171,7 +171,7 @@ func (parser *ParsingOperation) parseFuncArguments (
if err != nil { return }
case lexer.TokenKindLessThan:
output := Declaration { }
output := FuncOutput { }
output.location = parser.token.Location()
// get name
@@ -186,11 +186,32 @@ func (parser *ParsingOperation) parseFuncArguments (
if err != nil { return }
output.what, err = parser.parseType()
if err != nil { return }
into.outputs = append(into.outputs, output)
parser.expect(lexer.TokenKindNewline)
// skip newline if it is there
if parser.token.Is(lexer.TokenKindNewline) {
parser.nextToken()
// if we have exited the output, break
exited :=
!parser.token.Is(lexer.TokenKindIndent) ||
parser.token.Value().(int) != 2
if exited {
into.outputs = append(into.outputs, output)
break
}
err = parser.nextToken()
if err != nil { return }
}
// get default value
output.argument, err = parser.parseArgument()
into.outputs = append(into.outputs, output)
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
}


@@ -6,15 +6,15 @@ func TestFunc (test *testing.T) {
checkTree ("../tests/parser/func", false,
`:arf
---
func ro aBasicExternal
> someInput:Int:mut
< someOutput:Int:<4>
---
external
func ro bMethod
@ bird:{Bird}
> someInput:Int:mut
< someOutput:Int:<4>
< someOutput:Int 4
---
external
func ro aBasicExternal
> someInput:Int:mut
< someOutput:Int 4
---
external
func ro cBasicPhrases
@@ -25,7 +25,7 @@ func ro cBasicPhrases
[fn [gn 329 983 57] 123]
func ro dArgumentTypes
---
[bird tree butterfly.wing "hello world" grass:Int:8:mut]
[bird tree butterfly.wing 'hello world' grass:Int:8:mut]
func ro eMath
> x:Int
> y:Int
@@ -99,11 +99,19 @@ func ro gControlFlow
[nestedThing]
[else]
[otherThing]
func ro hSetPhrase
func ro hDataInit
---
[let x:Int:<3>]
[let y:{Int}:<[loc x]>]
[let z:Int:8:<398 9 2309 983 -2387 478 555 123>]
[let bird:Bird:(.that:(.whenYou:<99999>) .this:<324>)]
[= x:Int 3]
[= y:{Int} [loc x]]
[= z:Int:8 (398 9 2309 983 -2387 478 555 123)]
[= bird:Bird ((99999) 324)]
func ro iDereference
> x:{Int}
> y:{Int ..}
> z:Int:4
---
[= b:Int {x}]
[= c:Int {y 4}]
[= d:Int {z 3}]
`, test)
}

parser/list.go Normal file

@@ -0,0 +1,32 @@
package parser
import "git.tebibyte.media/arf/arf/lexer"
// parseList parses a parenthetically delimited list of arguments.
func (parser *parsingOperation) parseList () (list List, err error) {
list.location = parser.token.Location()
err = parser.expect(lexer.TokenKindLParen)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
for {
err = parser.skipWhitespace()
if err != nil { return }
// if we have reached the end of the list, stop
if parser.token.Is(lexer.TokenKindRParen) { break }
// otherwise, parse argument
var argument Argument
argument, err = parser.parseArgument()
list.arguments = append(list.arguments, argument)
if err != nil { return }
}
err = parser.nextToken()
if err != nil { return }
return
}
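The loop in parseList checks for the closing delimiter before parsing each argument, which is what lets an empty list terminate cleanly. A simplified stand-alone version of that loop over a plain token slice (the names here are illustrative, not the package's API):

```go
package main

import "fmt"

// parseList mirrors the loop in parser/list.go over a simplified token
// stream: look for the closing delimiter first, otherwise consume one
// argument, so an empty list parses without a special case.
func parseList(tokens []string) (args []string, rest []string, err error) {
	for len(tokens) > 0 {
		if tokens[0] == ")" {
			return args, tokens[1:], nil
		}
		args = append(args, tokens[0])
		tokens = tokens[1:]
	}
	return nil, nil, fmt.Errorf("unterminated list")
}

func main() {
	args, rest, err := parseList([]string{"398", "9", ")", "next"})
	fmt.Println(args, rest, err) // [398 9] [next] <nil>
}
```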


@@ -1,14 +1,11 @@
package parser
import "os"
import "path/filepath"
import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// parseMeta parses the metadata header at the top of an arf file.
func (parser *ParsingOperation) parseMeta () (err error) {
cwd, _ := os.Getwd()
func (parser *parsingOperation) parseMeta () (err error) {
for {
err = parser.expect (
lexer.TokenKindName,
@@ -37,7 +34,7 @@ func (parser *ParsingOperation) parseMeta () (err error) {
case "require":
// if import path is relative, get absolute path.
if value[0] == '.' {
value = filepath.Join(cwd, value)
value = filepath.Join(parser.modulePath, value)
} else if value[0] != '/' {
// TODO: get arf import path from an env
// variable, and default to this if not found.


@@ -8,11 +8,11 @@ func TestMeta (test *testing.T) {
cwd, _ := os.Getwd()
checkTree ("../tests/parser/meta", false,
`:arf
author "Sasha Koshka"
license "GPLv3"
require "` + filepath.Join(cwd, "./some/local/module") + `"
require "/some/absolute/path/to/someModule"
require "/usr/local/include/arf/someLibraryInstalledInStandardLocation"
author 'Sasha Koshka'
license 'GPLv3'
require '` + filepath.Join(cwd, "../tests/parser/meta/some/local/module") + `'
require '/usr/local/include/arf/someLibraryInstalledInStandardLocation'
require '/some/absolute/path/to/someModule'
---
`, test)
}


@@ -3,7 +3,7 @@ package parser
import "git.tebibyte.media/arf/arf/lexer"
// parseIdentifier parses an identifier made out of dot separated names.
func (parser *ParsingOperation) parseIdentifier () (
func (parser *parsingOperation) parseIdentifier () (
identifier Identifier,
err error,
) {


@@ -10,19 +10,19 @@ type locatable struct {
}
// Location returns the location of the node.
func (trait locatable) Location () (location file.Location) {
location = trait.location
func (node locatable) Location () (location file.Location) {
location = node.location
return
}
// NewError creates a new error at the node's location.
func (trait locatable) NewError (
func (node locatable) NewError (
message string,
kind infoerr.ErrorKind,
) (
err error,
) {
err = infoerr.NewError(trait.location, message, kind)
err = infoerr.NewError(node.location, message, kind)
return
}
@@ -32,8 +32,8 @@ type nameable struct {
}
// Name returns the name of the node.
func (trait nameable) Name () (name string) {
name = trait.name
func (node nameable) Name () (name string) {
name = node.name
return
}
// typeable allows a node to have a type.
@@ -42,8 +42,8 @@ type typeable struct {
}
// Type returns the type of the node.
func (trait typeable) Type () (what Type) {
what = trait.what
func (node typeable) Type () (what Type) {
what = node.what
return
}
@@ -53,18 +53,35 @@ type permissionable struct {
}
// Permission returns the permission of the node.
func (trait permissionable) Permission () (permission types.Permission) {
permission = trait.permission
func (node permissionable) Permission () (permission types.Permission) {
permission = node.permission
return
}
// valuable allows a node to have an argument value.
type valuable struct {
value Argument
argument Argument
}
// Value returns the value argument of the node.
func (trait valuable) Value () (value Argument) {
value = trait.value
// Argument returns the value argument of the node.
func (node valuable) Argument () (argument Argument) {
argument = node.argument
return
}
// multiValuable allows a node to have several argument values.
type multiValuable struct {
arguments []Argument
}
// Argument returns the argument at index.
func (node multiValuable) Argument (index int) (argument Argument) {
argument = node.arguments[index]
return
}
// Length returns the number of arguments in the node.
func (node multiValuable) Length () (length int) {
length = len(node.arguments)
return
}
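These trait structs work by Go's struct embedding: a node embeds `locatable`, `nameable`, and so on, and the accessor methods are promoted onto the node. A minimal illustration of the mechanism, with simplified stand-in types:

```go
package main

import "fmt"

// Simplified versions of the parser's trait structs; embedding gives a node
// the accessor methods without redeclaring them.
type locatable struct{ location string }

func (node locatable) Location() string { return node.location }

type nameable struct{ name string }

func (node nameable) Name() string { return node.name }

// dataNode is a hypothetical node that picks up Location and Name by
// embedding both traits.
type dataNode struct {
	locatable
	nameable
}

func main() {
	node := dataNode{locatable{"main.arf:3"}, nameable{"x"}}
	fmt.Println(node.Name(), node.Location()) // x main.arf:3
}
```

Renaming the receivers from `trait` to `node`, as the diff does, reflects this: the method is called on the embedding node, not on the trait in isolation.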


@@ -1,3 +1,15 @@
/*
Package parser implements a parser for the ARF language. It contains an abstract
syntax tree (SyntaxTree), various tree nodes, and a function called Fetch that
returns a SyntaxTree for the module located at the given path. Internally, the
parser caches parsing results so Fetch may be called frequently.
Trees returned by this package can be expected to be internally consistent and
syntactically correct, but not semantically correct. Ensuring the semantic
integrity of ARF code is the job of the analyzer package.
This package automatically invokes the lexer before parsing module files.
*/
package parser
import "io"
@@ -7,8 +19,8 @@ import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// ParsingOperation holds information about an ongoing parsing operation.
type ParsingOperation struct {
// parsingOperation holds information about an ongoing parsing operation.
type parsingOperation struct {
modulePath string
token lexer.Token
tokens []lexer.Token
@@ -18,9 +30,9 @@ type ParsingOperation struct {
tree SyntaxTree
}
// Fetch returns the parsed module located at the specified path, and returns an
// abstract syntax tree. If the module has not yet been parsed, it parses it
// first.
// Fetch returns the parsed module located at the specified path as a
// SyntaxTree. If the module has not yet been parsed, it parses it first. If it
// has, it grabs it out of a cache. This function can be called frequently.
func Fetch (modulePath string, skim bool) (tree SyntaxTree, err error) {
if modulePath[0] != '/' {
panic("module path did not begin at filesystem root")
@@ -34,7 +46,7 @@ func Fetch (modulePath string, skim bool) (tree SyntaxTree, err error) {
}
// miss, so parse the module.
parser := ParsingOperation {
parser := parsingOperation {
modulePath: modulePath,
skimming: skim,
tree: SyntaxTree {
@@ -62,6 +74,8 @@ func Fetch (modulePath string, skim bool) (tree SyntaxTree, err error) {
// parse the tokens into the module
err = parser.parse(sourceFile)
if err == io.EOF { err = nil }
if err != nil { return }
}
tree = parser.tree
@@ -76,7 +90,7 @@ func Fetch (modulePath string, skim bool) (tree SyntaxTree, err error) {
}
// parse parses a file and adds it to the syntax tree.
func (parser *ParsingOperation) parse (sourceFile *file.File) (err error) {
func (parser *parsingOperation) parse (sourceFile *file.File) (err error) {
var tokens []lexer.Token
tokens, err = lexer.Tokenize(sourceFile)
if err != nil { return }
@@ -99,7 +113,7 @@ func (parser *ParsingOperation) parse (sourceFile *file.File) (err error) {
// expect takes in a list of allowed token kinds, and returns an error if the
// current token isn't one of them. If the length of allowed is zero, this
// function will not return an error.
func (parser *ParsingOperation) expect (allowed ...lexer.TokenKind) (err error) {
func (parser *parsingOperation) expect (allowed ...lexer.TokenKind) (err error) {
if len(allowed) == 0 { return }
for _, kind := range allowed {
@@ -129,7 +143,7 @@ func (parser *ParsingOperation) expect (allowed ...lexer.TokenKind) (err error)
}
// nextToken is the same as expect, but it advances to the next token first.
func (parser *ParsingOperation) nextToken (allowed ...lexer.TokenKind) (err error) {
func (parser *parsingOperation) nextToken (allowed ...lexer.TokenKind) (err error) {
parser.tokenIndex ++
if parser.tokenIndex >= len(parser.tokens) { return io.EOF }
parser.token = parser.tokens[parser.tokenIndex]
@@ -140,7 +154,7 @@ func (parser *ParsingOperation) nextToken (allowed ...lexer.TokenKind) (err erro
// previousToken goes back one token. If the parser is already at the beginning,
// this does nothing.
func (parser *ParsingOperation) previousToken () {
func (parser *parsingOperation) previousToken () {
parser.tokenIndex --
if parser.tokenIndex < 0 { parser.tokenIndex = 0 }
parser.token = parser.tokens[parser.tokenIndex]
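The skipIndentLevel change below adds brace, paren, and bracket counters so that a newline inside a delimited expression no longer ends the indented block. The core of that bookkeeping can be sketched in isolation (runes stand in for token kinds):

```go
package main

import "fmt"

// bracketDepth tracks {, (, [ nesting across a token stream, the counters
// skipIndentLevel keeps so a dedent is only honored when every delimiter
// pair is closed.
func bracketDepth(tokens string) int {
	depth := 0
	for _, token := range tokens {
		switch token {
		case '{', '(', '[':
			depth++
		case '}', ')', ']':
			depth--
		}
	}
	return depth
}

func main() {
	fmt.Println(bracketDepth("(398 9"))  // 1: still inside a list, keep skipping
	fmt.Println(bracketDepth("(398 9)")) // 0: safe to end the block
}
```

A positive depth suppresses the indentation check; only at depth zero does a shallower indent terminate the skip.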
@@ -149,17 +163,36 @@ func (parser *ParsingOperation) previousToken () {
// skipIndentLevel advances the parser, ignoring every line with an indentation
// equal to or greater than the specified indent.
func (parser *ParsingOperation) skipIndentLevel (indent int) (err error) {
func (parser *parsingOperation) skipIndentLevel (indent int) (err error) {
braceLevel := 0
parenLevel := 0
bracketLevel := 0
for {
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
if !parser.token.Is(lexer.TokenKindIndent) ||
parser.token.Value().(int) < indent {
shouldBreak :=
!parser.token.Is(lexer.TokenKindIndent) ||
parser.token.Value().(int) < indent
shouldBreak =
shouldBreak &&
braceLevel < 1 &&
parenLevel < 1 &&
bracketLevel < 1
return
}
if shouldBreak { return }
}
switch parser.token.Kind() {
case lexer.TokenKindLBrace: braceLevel ++
case lexer.TokenKindRBrace: braceLevel --
case lexer.TokenKindLParen: parenLevel ++
case lexer.TokenKindRParen: parenLevel --
case lexer.TokenKindLBracket: bracketLevel ++
case lexer.TokenKindRBracket: bracketLevel --
}
err = parser.nextToken()
@@ -168,7 +201,7 @@ func (parser *ParsingOperation) skipIndentLevel (indent int) (err error) {
}
// skipWhitespace skips over newlines and indent tokens.
func (parser *ParsingOperation) skipWhitespace () (err error) {
func (parser *parsingOperation) skipWhitespace () (err error) {
for {
isWhitespace :=
parser.token.Is(lexer.TokenKindIndent) ||


@@ -87,7 +87,7 @@ var controlFlowKinds = []PhraseKind {
}
// parseBlock parses an indented block of phrases
func (parser *ParsingOperation) parseBlock (
func (parser *parsingOperation) parseBlock (
indent int,
) (
block Block,
@@ -108,7 +108,7 @@ func (parser *ParsingOperation) parseBlock (
// parseBlockLevelPhrase parses a phrase that is not being used as an argument
// to something else. This method is allowed to do things like parse return
// directions, and indented blocks beneath the phrase.
func (parser *ParsingOperation) parseBlockLevelPhrase (
func (parser *parsingOperation) parseBlockLevelPhrase (
indent int,
) (
phrase Phrase,
@@ -129,7 +129,10 @@ func (parser *ParsingOperation) parseBlockLevelPhrase (
// get command
err = parser.expect(validPhraseStartTokens...)
if err != nil { return }
phrase.command, phrase.kind, err = parser.parsePhraseCommand()
phrase.command,
phrase.kind,
phrase.operator,
err = parser.parsePhraseCommand()
if err != nil { return }
for {
@@ -223,7 +226,7 @@ func (parser *ParsingOperation) parseBlockLevelPhrase (
// parseArgumentLevelPhrase parses a phrase that is being used as an argument to
// something. It is forbidden from using return direction, and it must be
// delimited by brackets.
func (parser *ParsingOperation) parseArgumentLevelPhrase () (
func (parser *parsingOperation) parseArgumentLevelPhrase () (
phrase Phrase,
err error,
) {
@@ -233,7 +236,10 @@ func (parser *ParsingOperation) parseArgumentLevelPhrase () (
// get command
err = parser.nextToken(validPhraseStartTokens...)
if err != nil { return }
phrase.command, phrase.kind, err = parser.parsePhraseCommand()
phrase.command,
phrase.kind,
phrase.operator,
err = parser.parsePhraseCommand()
if err != nil { return }
for {
@@ -271,25 +277,23 @@ func (parser *ParsingOperation) parseArgumentLevelPhrase () (
}
// parsePhraseCommand parses the command argument of a phrase.
func (parser *ParsingOperation) parsePhraseCommand () (
command Argument,
kind PhraseKind,
err error,
func (parser *parsingOperation) parsePhraseCommand () (
command Argument,
kind PhraseKind,
operator lexer.TokenKind,
err error,
) {
if isTokenOperator(parser.token) {
err = parser.expect(operatorTokens...)
if err != nil { return }
command.location = parser.token.Location()
command.kind = ArgumentKindOperator
command.value = parser.token.Kind()
if parser.token.Is(lexer.TokenKindColon) {
kind = PhraseKindCase
} else if parser.token.Is(lexer.TokenKindAssignment) {
kind = PhraseKindAssign
} else {
kind = PhraseKindOperator
operator = parser.token.Kind()
}
err = parser.nextToken()
@@ -308,10 +312,10 @@ func (parser *ParsingOperation) parsePhraseCommand () (
identifier := command.value.(Identifier)
if len(identifier.trail) == 1 {
switch identifier.trail[0] {
case "let":
kind = PhraseKindLet
case "loc":
kind = PhraseKindReference
case "cast":
kind = PhraseKindCast
case "defer":
kind = PhraseKindDefer
case "if":


@@ -21,7 +21,7 @@ func ro fComplexFunction
external
func ro gExternalFunction
> x:Int
< arr:Int
< arr:Int 5
---
external
`, test)


@@ -34,15 +34,16 @@ func (tree SyntaxTree) ToString (indent int) (output string) {
output += doIndent(indent, ":arf\n")
if tree.author != "" {
output += doIndent(indent, "author \"", tree.author, "\"\n")
output += doIndent(indent, "author '", tree.author, "'\n")
}
if tree.license != "" {
output += doIndent(indent, "license \"", tree.license, "\"\n")
output += doIndent(indent, "license '", tree.license, "'\n")
}
for _, require := range tree.requires {
output += doIndent(indent, "require \"", require, "\"\n")
for _, name := range sortMapKeysAlphabetically(tree.requires) {
require := tree.requires[name]
output += doIndent(indent, "require '", require, "'\n")
}
output += doIndent(indent, "---\n")
@@ -66,84 +67,17 @@ func (identifier Identifier) ToString () (output string) {
return
}
func (values ObjectDefaultValues) ToString (
indent int,
breakLine bool,
) (
output string,
) {
if !breakLine { indent = 0 }
output += doIndent(indent, "(")
if breakLine { output += "\n" }
for index, name := range sortMapKeysAlphabetically(values) {
if index > 0 && !breakLine { output += " " }
value := values[name]
output += doIndent(indent, "." + name + ":")
isComplexDefaultValue :=
value.kind == ArgumentKindObjectDefaultValues ||
value.kind == ArgumentKindArrayDefaultValues
if isComplexDefaultValue {
if breakLine { output += "\n" }
output += value.ToString(indent + 1, breakLine)
} else {
output += "<"
output += value.ToString(indent + 1, false)
output += ">"
}
if breakLine { output += "\n" }
}
output += doIndent(indent, ")")
return
}
func (values ArrayDefaultValues) ToString (
indent int,
breakLine bool,
) (
output string,
) {
if !breakLine { indent = 0 }
output += doIndent(indent, "<")
if breakLine { output += "\n" }
for index, value := range values {
if index > 0 && !breakLine { output += " " }
output += value.ToString(indent, breakLine)
}
output += doIndent(indent, ">")
return
}
func (member TypeMember) ToString (indent int, breakLine bool) (output string) {
output += doIndent(indent, ".")
output += member.permission.ToString() + " "
output += member.name + ":"
output += member.what.ToString(indent + 1, breakLine)
if member.bitWidth > 0 {
output += fmt.Sprint(" & ", member.bitWidth)
func (what Type) ToString () (output string) {
if what.kind == TypeKindNil {
output += "NIL-TYPE"
return
}
if breakLine {
output += "\n"
}
return
}
func (what Type) ToString (indent int, breakLine bool) (output string) {
if what.kind == TypeKindBasic {
output += what.name.ToString()
} else {
output += "{"
output += what.points.ToString(indent, breakLine)
output += what.points.ToString()
if what.kind == TypeKindVariableArray {
output += " .."
@@ -159,46 +93,27 @@ func (what Type) ToString (indent int, breakLine bool) (output string) {
if what.mutable {
output += ":mut"
}
if what.members != nil {
if breakLine {
output += ":\n" + doIndent(indent, "(\n")
for _, member := range what.members {
output += member.ToString(indent, breakLine)
}
output += doIndent(indent, ")")
} else {
output += ":("
for index, member := range what.members {
if index > 0 { output += " " }
output += member.ToString(indent, breakLine)
}
output += ")"
}
}
defaultValueKind := what.defaultValue.kind
if defaultValueKind != ArgumentKindNil {
isComplexDefaultValue :=
defaultValueKind == ArgumentKindObjectDefaultValues ||
defaultValueKind == ArgumentKindArrayDefaultValues
if isComplexDefaultValue {
output += ":"
if breakLine { output += "\n" }
output += what.defaultValue.ToString(indent, breakLine)
} else {
output += ":<"
output += what.defaultValue.ToString(indent, false)
output += ">"
}
}
return
}
func (declaration Declaration) ToString (indent int) (output string) {
func (declaration Declaration) ToString () (output string) {
output += declaration.name + ":"
output += declaration.what.ToString(indent, false)
output += declaration.what.ToString()
return
}
func (list List) ToString (indent int, breakline bool) (output string) {
if !breakline { indent = 0 }
output += doIndent(indent, "(")
if breakline { output += "\n" }
for index, argument := range list.arguments {
if !breakline && index > 0 { output += " "}
output += argument.ToString(indent, breakline)
}
output += doIndent(indent, ")")
if breakline { output += "\n" }
return
}
@@ -216,13 +131,11 @@ func (argument Argument) ToString (indent int, breakLine bool) (output string) {
indent,
breakLine)
case ArgumentKindObjectDefaultValues:
output += argument.value.(ObjectDefaultValues).
ToString(indent, breakLine)
case ArgumentKindArrayDefaultValues:
output += argument.value.(ArrayDefaultValues).
ToString(indent, breakLine)
case ArgumentKindList:
output += argument.value.(List).ToString(indent, breakLine)
case ArgumentKindDereference:
output += argument.value.(Dereference).ToString(indent)
case ArgumentKindIdentifier:
output += doIndent (
@@ -233,7 +146,7 @@ func (argument Argument) ToString (indent int, breakLine bool) (output string) {
case ArgumentKindDeclaration:
output += doIndent (
indent,
argument.value.(Declaration).ToString(indent))
argument.value.(Declaration).ToString())
if breakLine { output += "\n" }
case ArgumentKindInt, ArgumentKindUInt, ArgumentKindFloat:
@@ -243,82 +156,7 @@ func (argument Argument) ToString (indent int, breakLine bool) (output string) {
case ArgumentKindString:
output += doIndent (
indent,
"\"" + argument.value.(string) + "\"")
if breakLine { output += "\n" }
case ArgumentKindRune:
output += doIndent (
indent,
"'" + string(argument.value.(rune)) + "'")
if breakLine { output += "\n" }
case ArgumentKindOperator:
var stringValue string
switch argument.value.(lexer.TokenKind) {
case lexer.TokenKindColon:
stringValue = ":"
case lexer.TokenKindPlus:
stringValue = "+"
case lexer.TokenKindMinus:
stringValue = "-"
case lexer.TokenKindIncrement:
stringValue = "++"
case lexer.TokenKindDecrement:
stringValue = "--"
case lexer.TokenKindAsterisk:
stringValue = "*"
case lexer.TokenKindSlash:
stringValue = "/"
case lexer.TokenKindExclamation:
stringValue = "!"
case lexer.TokenKindPercent:
stringValue = "%"
case lexer.TokenKindPercentAssignment:
stringValue = "%="
case lexer.TokenKindTilde:
stringValue = "~"
case lexer.TokenKindTildeAssignment:
stringValue = "~="
case lexer.TokenKindAssignment:
stringValue = "="
case lexer.TokenKindEqualTo:
stringValue = "=="
case lexer.TokenKindNotEqualTo:
stringValue = "!="
case lexer.TokenKindLessThanEqualTo:
stringValue = "<="
case lexer.TokenKindLessThan:
stringValue = "<"
case lexer.TokenKindLShift:
stringValue = "<<"
case lexer.TokenKindLShiftAssignment:
stringValue = "<<="
case lexer.TokenKindGreaterThan:
stringValue = ">"
case lexer.TokenKindGreaterThanEqualTo:
stringValue = ">="
case lexer.TokenKindRShift:
stringValue = ">>"
case lexer.TokenKindRShiftAssignment:
stringValue = ">>="
case lexer.TokenKindBinaryOr:
stringValue = "|"
case lexer.TokenKindBinaryOrAssignment:
stringValue = "|="
case lexer.TokenKindLogicalOr:
stringValue = "||"
case lexer.TokenKindBinaryAnd:
stringValue = "&"
case lexer.TokenKindBinaryAndAssignment:
stringValue = "&="
case lexer.TokenKindLogicalAnd:
stringValue = "&&"
case lexer.TokenKindBinaryXor:
stringValue = "^"
case lexer.TokenKindBinaryXorAssignment:
stringValue = "^="
}
output += doIndent(indent, stringValue)
"'" + argument.value.(string) + "'")
if breakLine { output += "\n" }
}
@@ -331,7 +169,11 @@ func (section DataSection) ToString (indent int) (output string) {
"data ",
section.permission.ToString(), " ",
section.name, ":",
section.what.ToString(indent + 1, true), "\n")
section.what.ToString(), "\n")
if section.argument.kind != ArgumentKindNil {
output += section.argument.ToString(indent + 1, true)
}
if section.external {
output += doIndent(indent + 1, "external\n")
@@ -340,37 +182,57 @@ func (section DataSection) ToString (indent int) (output string) {
return
}
func (member TypeSectionMember) ToString (indent int) (output string) {
output += doIndent(indent, member.permission.ToString())
output += " " + member.name
if member.what.kind != TypeKindNil {
output += ":" + member.what.ToString()
}
if member.argument.kind != ArgumentKindNil {
output += " " + member.argument.ToString(indent, false)
}
if member.bitWidth > 0 {
output += fmt.Sprint(" & ", member.bitWidth)
}
output += "\n"
return
}
func (section TypeSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"type ",
section.permission.ToString(), " ",
section.name, ":",
section.what.ToString(indent + 1, true), "\n")
section.what.ToString(), "\n")
if section.argument.kind != ArgumentKindNil {
output += section.argument.ToString(indent + 1, true)
}
for _, member := range section.members {
output += member.ToString(indent + 1)
}
return
}
func (section EnumSection) ToString (indent int) (output string) {
output += doIndent (
indent,
"enum ",
section.permission.ToString(), " ",
section.name, ":",
section.what.ToString(indent + 1, true), "\n")
section.what.ToString(), "\n")
for _, member := range section.members {
output += doIndent(indent + 1, "- ", member.name)
isComplexInitialization :=
member.value.kind == ArgumentKindObjectDefaultValues ||
member.value.kind == ArgumentKindArrayDefaultValues
if isComplexInitialization {
output += ":\n"
output += member.value.ToString(indent + 2, true)
} else if member.value.kind != ArgumentKindNil {
output += ":<" + member.value.ToString(0, false) + ">"
if member.argument.kind != ArgumentKindNil {
output += " " + member.argument.ToString(indent, false)
}
output += "\n"
}
@@ -385,10 +247,21 @@ func (section FaceSection) ToString (indent int) (output string) {
section.name, ":",
section.inherits.ToString(), "\n")
for _, name := range sortMapKeysAlphabetically(section.behaviors) {
behavior := section.behaviors[name]
output += behavior.ToString(indent + 1)
if section.kind == FaceKindType {
for _, name := range sortMapKeysAlphabetically(section.behaviors) {
behavior := section.behaviors[name]
output += behavior.ToString(indent + 1)
}
} else if section.kind == FaceKindFunc {
for _, inputItem := range section.inputs {
output += doIndent(indent + 1, "> ", inputItem.ToString(), "\n")
}
for _, outputItem := range section.outputs {
output += doIndent(indent + 1, "< ", outputItem.ToString(), "\n")
}
}
return
}
@@ -396,22 +269,111 @@ func (behavior FaceBehavior) ToString (indent int) (output string) {
output += doIndent(indent, behavior.name, "\n")
for _, inputItem := range behavior.inputs {
output += doIndent(indent + 1, "> ", inputItem.ToString(indent), "\n")
output += doIndent(indent + 1, "> ", inputItem.ToString(), "\n")
}
for _, outputItem := range behavior.outputs {
output += doIndent(indent + 1, "< ", outputItem.ToString(indent), "\n")
output += doIndent(indent + 1, "< ", outputItem.ToString(), "\n")
}
return
}
func (dereference Dereference) ToString (indent int) (output string) {
output += "{"
output += dereference.argument.ToString(indent, false)
if dereference.offset != 0 {
output += fmt.Sprint(" ", dereference.offset)
}
output += "}"
return
}
func (phrase Phrase) ToString (indent int, ownLine bool) (output string) {
if ownLine {
output += doIndent(indent)
}
output += "[" + phrase.command.ToString(0, false)
output += "["
switch phrase.kind {
case PhraseKindCase:
output += ":"
case PhraseKindAssign:
output += "="
case PhraseKindOperator:
switch phrase.operator {
case lexer.TokenKindColon:
output += ":"
case lexer.TokenKindPlus:
output += "+"
case lexer.TokenKindMinus:
output += "-"
case lexer.TokenKindIncrement:
output += "++"
case lexer.TokenKindDecrement:
output += "--"
case lexer.TokenKindAsterisk:
output += "*"
case lexer.TokenKindSlash:
output += "/"
case lexer.TokenKindExclamation:
output += "!"
case lexer.TokenKindPercent:
output += "%"
case lexer.TokenKindPercentAssignment:
output += "%="
case lexer.TokenKindTilde:
output += "~"
case lexer.TokenKindTildeAssignment:
output += "~="
case lexer.TokenKindAssignment:
output += "="
case lexer.TokenKindEqualTo:
output += "=="
case lexer.TokenKindNotEqualTo:
output += "!="
case lexer.TokenKindLessThanEqualTo:
output += "<="
case lexer.TokenKindLessThan:
output += "<"
case lexer.TokenKindLShift:
output += "<<"
case lexer.TokenKindLShiftAssignment:
output += "<<="
case lexer.TokenKindGreaterThan:
output += ">"
case lexer.TokenKindGreaterThanEqualTo:
output += ">="
case lexer.TokenKindRShift:
output += ">>"
case lexer.TokenKindRShiftAssignment:
output += ">>="
case lexer.TokenKindBinaryOr:
output += "|"
case lexer.TokenKindBinaryOrAssignment:
output += "|="
case lexer.TokenKindLogicalOr:
output += "||"
case lexer.TokenKindBinaryAnd:
output += "&"
case lexer.TokenKindBinaryAndAssignment:
output += "&="
case lexer.TokenKindLogicalAnd:
output += "&&"
case lexer.TokenKindBinaryXor:
output += "^"
case lexer.TokenKindBinaryXorAssignment:
output += "^="
}
default:
output += phrase.command.ToString(0, false)
}
for _, argument := range phrase.arguments {
output += " " + argument.ToString(0, false)
}
@@ -433,6 +395,17 @@ func (phrase Phrase) ToString (indent int, ownLine bool) (output string) {
return
}
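The operator switch above maps each lexer token kind to its printed form. The same mapping could also live in a lookup table; a sketch using a stand-in TokenKind type rather than the project's lexer package:

```go
package main

import "fmt"

// TokenKind stands in for lexer.TokenKind from the real project.
type TokenKind int

const (
	TokenKindPlus TokenKind = iota
	TokenKindMinus
	TokenKindIncrement
	TokenKindLShift
	TokenKindLShiftAssignment
)

// operatorStrings maps operator token kinds to their textual form,
// replacing a long switch statement with a table lookup.
var operatorStrings = map[TokenKind]string{
	TokenKindPlus:             "+",
	TokenKindMinus:            "-",
	TokenKindIncrement:        "++",
	TokenKindLShift:           "<<",
	TokenKindLShiftAssignment: "<<=",
}

func main() {
	fmt.Println(operatorStrings[TokenKindLShiftAssignment]) // prints <<=
}
```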
func (funcOutput FuncOutput) ToString (indent int) (output string) {
output += doIndent(indent + 1)
output += "< " + funcOutput.Declaration.ToString()
if funcOutput.argument.kind != ArgumentKindNil {
output += " " + funcOutput.argument.ToString(indent, false)
}
output += "\n"
return
}
func (block Block) ToString (indent int) (output string) {
for _, phrase := range block {
output += phrase.ToString(indent, true)
@@ -451,15 +424,15 @@ func (section FuncSection) ToString (indent int) (output string) {
if section.receiver != nil {
output += doIndent (
indent + 1,
"@ ", section.receiver.ToString(indent), "\n")
"@ ", section.receiver.ToString(), "\n")
}
for _, inputItem := range section.inputs {
output += doIndent(indent + 1, "> ", inputItem.ToString(indent), "\n")
output += doIndent(indent + 1, "> ", inputItem.ToString(), "\n")
}
for _, outputItem := range section.outputs {
output += doIndent(indent + 1, "< ", outputItem.ToString(indent), "\n")
output += outputItem.ToString(indent)
}
output += doIndent(indent + 1, "---\n")


@@ -2,6 +2,7 @@ package parser
import "git.tebibyte.media/arf/arf/file"
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
// SyntaxTree represents an abstract syntax tree. It covers an entire module. It
@@ -35,9 +36,12 @@ type Identifier struct {
type TypeKind int
const (
// TypeKindNil means that the type is unspecified.
TypeKindNil TypeKind = iota
// TypeKindBasic means it's a normal type and inherits from something.
// Basic types can define new members on their parent types.
TypeKindBasic TypeKind = iota
TypeKindBasic
// TypeKindPointer means it's a pointer.
TypeKindPointer
@@ -46,16 +50,6 @@ const (
TypeKindVariableArray
)
// TypeMember represents a member variable of a type specifier.
type TypeMember struct {
locatable
nameable
typeable
permissionable
bitWidth uint64
}
// Type represents a type specifier
type Type struct {
locatable
@@ -69,12 +63,6 @@ type Type struct {
// not applicable for basic.
points *Type
// if non-nil, this type defines new members.
members []TypeMember
// the default value of the type.
defaultValue Argument
}
// Declaration represents a variable declaration.
@@ -84,12 +72,14 @@ type Declaration struct {
typeable
}
// ObjectDefaultValues represents a list of object member initialization
// attributes.
type ObjectDefaultValues map[string] Argument
// List represents an array or object literal.
type List struct {
locatable
// ArrayDefaultValues represents a list of elements initializing an array.
type ArrayDefaultValues []Argument
// TODO: have an array of unnamed arguments, and a map of named
// arguments
multiValuable
}
// ArgumentKind specifies the type of thing the value of an argument should be
// cast to.
@@ -103,18 +93,12 @@ const (
// etc...
ArgumentKindPhrase
// (argument argument argument)
ArgumentKindList
// {name}
ArgumentKindDereference
// {name 23}
ArgumentKindSubscript
// (.name <value>)
// (.name <value> .name (.name <value>))
ArgumentKindObjectDefaultValues
// <4 32 98 5>
ArgumentKindArrayDefaultValues
ArgumentKindDereference
// name.name
// name.name.name
@@ -137,15 +121,8 @@ const (
// 0.44
ArgumentKindFloat
// "hello world"
// 'hello world'
ArgumentKindString
// 'S'
ArgumentKindRune
// + - * / etc...
// this is only used as a phrase command
ArgumentKindOperator
)
// Argument represents a value that can be placed anywhere a value goes. This
@@ -164,16 +141,32 @@ type DataSection struct {
nameable
typeable
permissionable
valuable
external bool
}
// TypeSectionMember represents a member variable of a type section.
type TypeSectionMember struct {
locatable
nameable
typeable
permissionable
valuable
bitWidth uint64
}
// TypeSection represents a type definition.
type TypeSection struct {
locatable
nameable
typeable
permissionable
valuable
// if non-nil, this type defines new members.
members []TypeSectionMember
}
// EnumMember represents a member of an enum section.
@@ -193,6 +186,16 @@ type EnumSection struct {
members []EnumMember
}
// FaceKind determines whether an interface is a type interface or a function
// interface.
type FaceKind int
const (
FaceKindEmpty FaceKind = iota
FaceKindType
FaceKindFunc
)
// FaceBehavior represents a behavior of an interface section.
type FaceBehavior struct {
locatable
@@ -208,8 +211,20 @@ type FaceSection struct {
nameable
permissionable
inherits Identifier
kind FaceKind
behaviors map[string] FaceBehavior
FaceBehavior
}
// Dereference represents a pointer dereference or array subscript.
type Dereference struct {
locatable
valuable
// if a simple dereference was parsed, this should just be zero.
offset uint64
}
// PhraseKind determines what semantic role a phrase plays.
@@ -219,9 +234,9 @@ const (
PhraseKindCall = iota
PhraseKindCallExternal
PhraseKindOperator
PhraseKindLet
PhraseKindAssign
PhraseKindReference
PhraseKindCast
PhraseKindDefer
PhraseKindIf
PhraseKindElseIf
@@ -236,12 +251,18 @@ const (
// syntactical concept.
type Phrase struct {
location file.Location
command Argument
arguments []Argument
returnees []Argument
multiValuable
kind PhraseKind
// TODO: do not have this be an argument. make a string version and an
// identifier version.
command Argument
// only applicable for PhraseKindOperator
operator lexer.TokenKind
// only applicable for control flow phrases
block Block
}
@@ -249,6 +270,13 @@ type Phrase struct {
// Block represents a scoped/indented block of code.
type Block []Phrase
// FuncOutput represents a function output declaration. It allows for a default
// value.
type FuncOutput struct {
Declaration
valuable
}
// FuncSection represents a function section.
type FuncSection struct {
locatable
@@ -257,7 +285,7 @@ type FuncSection struct {
receiver *Declaration
inputs []Declaration
outputs []Declaration
outputs []FuncOutput
root Block
external bool


@@ -2,13 +2,13 @@ package parser
import "git.tebibyte.media/arf/arf/lexer"
import "git.tebibyte.media/arf/arf/infoerr"
import "git.tebibyte.media/arf/arf/types"
// parseType parses a type notation of the form Name, {Name}, etc.
func (parser *ParsingOperation) parseType () (what Type, err error) {
func (parser *parsingOperation) parseType () (what Type, err error) {
err = parser.expect(lexer.TokenKindName, lexer.TokenKindLBrace)
if err != nil { return }
what.location = parser.token.Location()
what.kind = TypeKindBasic
if parser.token.Is(lexer.TokenKindLBrace) {
what.kind = TypeKindPointer
@@ -48,11 +48,7 @@ func (parser *ParsingOperation) parseType () (what Type, err error) {
err = parser.skipWhitespace()
if err != nil { return }
err = parser.expect(
lexer.TokenKindName,
lexer.TokenKindUInt,
lexer.TokenKindLParen,
lexer.TokenKindLessThan)
err = parser.expect(lexer.TokenKindName, lexer.TokenKindUInt)
if err != nil { return }
if parser.token.Is(lexer.TokenKindName) {
@@ -77,242 +73,8 @@ func (parser *ParsingOperation) parseType () (what Type, err error) {
err = parser.nextToken()
if err != nil { return }
} else if parser.token.Is(lexer.TokenKindLessThan) {
// parse default value
what.defaultValue, err = parser.parseBasicDefaultValue()
if err != nil { return }
} else if parser.token.Is(lexer.TokenKindLParen) {
// parse members and member default values
what.defaultValue,
what.members,
err = parser.parseObjectDefaultValueAndMembers()
if err != nil { return }
}
}
return
}
// parseBasicDefaultValue parses a default value of a non-object type.
func (parser *ParsingOperation) parseBasicDefaultValue () (
value Argument,
err error,
) {
value.location = parser.token.Location()
err = parser.expect(lexer.TokenKindLessThan)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
var attributes []Argument
defer func () {
// if we have multiple values, we need to return the full array
// instead.
if len(attributes) > 1 {
value.kind = ArgumentKindArrayDefaultValues
value.value = ArrayDefaultValues(attributes)
}
} ()
for {
err = parser.skipWhitespace()
if err != nil { return }
if parser.token.Is(lexer.TokenKindGreaterThan) { break }
value, err = parser.parseArgument()
if err != nil { return }
attributes = append(attributes, value)
}
err = parser.nextToken()
if err != nil { return }
return
}
// parseObjectDefaultValueAndMembers parses default values and new members of an
// object type.
func (parser *ParsingOperation) parseObjectDefaultValueAndMembers () (
value Argument,
members []TypeMember,
err error,
) {
value.location = parser.token.Location()
err = parser.expect(lexer.TokenKindLParen)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
var attributes ObjectDefaultValues
for {
err = parser.skipWhitespace()
if err != nil { return }
if parser.token.Is(lexer.TokenKindRParen) { break }
err = parser.expect(lexer.TokenKindDot)
if err != nil { return }
err = parser.nextToken(lexer.TokenKindName, lexer.TokenKindPermission)
if err != nil { return }
if parser.token.Is(lexer.TokenKindName) {
// parsing a default value for an inherited member
var memberName string
var memberValue Argument
memberName,
memberValue, err = parser.parseObjectInheritedMember()
if err != nil { return }
if value.kind == ArgumentKindNil {
// create default value map if it doesn't
// already exist
value.kind = ArgumentKindObjectDefaultValues
attributes = make(ObjectDefaultValues)
value.value = attributes
}
// TODO: error on duplicate
if memberValue.kind != ArgumentKindNil {
attributes[memberName] = memberValue
}
} else if parser.token.Is(lexer.TokenKindPermission) {
// parsing a member declaration
var member TypeMember
member, err = parser.parseObjectNewMember()
// TODO: error on duplicate
members = append(members, member)
if err != nil { return }
}
}
err = parser.nextToken()
if err != nil { return }
return
}
// parseObjectDefaultValue parses member default values only, and will throw an
// error when it encounters a new member definition.
func (parser *ParsingOperation) parseObjectDefaultValue () (
value Argument,
err error,
) {
value.location = parser.token.Location()
err = parser.expect(lexer.TokenKindLParen)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
var attributes ObjectDefaultValues
for {
err = parser.skipWhitespace()
if err != nil { return }
if parser.token.Is(lexer.TokenKindRParen) { break }
err = parser.expect(lexer.TokenKindDot)
if err != nil { return }
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
if value.kind == ArgumentKindNil {
value.kind = ArgumentKindObjectDefaultValues
attributes = make(ObjectDefaultValues)
value.value = attributes
}
var memberName string
var memberValue Argument
memberName,
memberValue, err = parser.parseObjectInheritedMember()
attributes[memberName] = memberValue
}
err = parser.nextToken()
if err != nil { return }
return
}
// .name:<value>
// parseObjectInheritedMember parses a new default value for an inherited
// member.
func (parser *ParsingOperation) parseObjectInheritedMember () (
name string,
value Argument,
err error,
) {
// get the name of the inherited member
err = parser.expect(lexer.TokenKindName)
value.location = parser.token.Location()
if err != nil { return }
name = parser.token.Value().(string)
// we require a default value to be present
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken(lexer.TokenKindLParen, lexer.TokenKindLessThan)
if err != nil { return }
if parser.token.Is(lexer.TokenKindLessThan) {
// parse default value
value, err = parser.parseBasicDefaultValue()
if err != nil { return }
} else if parser.token.Is(lexer.TokenKindLParen) {
// parse member default values
value, err = parser.parseObjectDefaultValue()
if err != nil { return }
}
return
}
// .ro name:Type:qualifier:<value>
// parseObjectNewMember parses an object member declaration, and its
// default value if it exists.
func (parser *ParsingOperation) parseObjectNewMember () (
member TypeMember,
err error,
) {
// get member permission
err = parser.expect(lexer.TokenKindPermission)
member.location = parser.token.Location()
if err != nil { return }
member.permission = parser.token.Value().(types.Permission)
// get member name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
member.name = parser.token.Value().(string)
// get type
err = parser.nextToken(lexer.TokenKindColon)
if err != nil { return }
err = parser.nextToken(lexer.TokenKindName, lexer.TokenKindLBrace)
if err != nil { return }
member.what, err = parser.parseType()
if err != nil { return }
// get bit width
if parser.token.Is(lexer.TokenKindBinaryAnd) {
err = parser.nextToken(lexer.TokenKindUInt)
if err != nil { return }
member.bitWidth = parser.token.Value().(uint64)
err = parser.nextToken()
if err != nil { return }
}
return
}


@@ -3,9 +3,9 @@ package parser
import "git.tebibyte.media/arf/arf/types"
import "git.tebibyte.media/arf/arf/lexer"
// parseTypeSection parses a blind type definition, meaning it can inherit from
// anything including primitives, but cannot define structure.
func (parser *ParsingOperation) parseTypeSection () (
// parseTypeSection parses a type definition. It can inherit from other types,
// and define new members on them.
func (parser *parsingOperation) parseTypeSection () (
section TypeSection,
err error,
) {
@@ -32,10 +32,105 @@ func (parser *ParsingOperation) parseTypeSection () (
section.what, err = parser.parseType()
if err != nil { return }
parser.expect(lexer.TokenKindNewline)
// see if value exists
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
// if we have exited the section, return
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken()
if err != nil { return }
}
// if we have not encountered members, get value and return.
if !parser.token.Is(lexer.TokenKindPermission) {
section.argument, err = parser.parseArgument()
if err != nil { return }
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}
parser.previousToken()
for {
// if we have exited the section, return
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 1 { return }
err = parser.nextToken(lexer.TokenKindPermission)
if err != nil { return }
var member TypeSectionMember
member, err = parser.parseTypeSectionMember()
section.members = append(section.members, member)
if err != nil { return }
}
}
// parseTypeSectionMember parses a type section member variable.
func (parser *parsingOperation) parseTypeSectionMember () (
member TypeSectionMember,
err error,
) {
// get permission
err = parser.expect(lexer.TokenKindPermission)
if err != nil { return }
member.permission = parser.token.Value().(types.Permission)
member.location = parser.token.Location()
// get name
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
member.name = parser.token.Value().(string)
// if there is a type, get it
err = parser.nextToken()
if err != nil { return }
if parser.token.Is(lexer.TokenKindColon) {
err = parser.nextToken(lexer.TokenKindName)
if err != nil { return }
member.what, err = parser.parseType()
if err != nil { return }
}
// see if value exists
if parser.token.Is(lexer.TokenKindNewline) {
err = parser.nextToken()
if err != nil { return }
// if we have exited the member, return
if !parser.token.Is(lexer.TokenKindIndent) { return }
if parser.token.Value().(int) != 2 { return }
err = parser.nextToken()
if err != nil { return }
}
// if default value exists, get it
if !parser.token.Is(lexer.TokenKindBinaryAnd) {
member.argument, err = parser.parseArgument()
}
// if there is a bit width specifier, get it
if parser.token.Is(lexer.TokenKindBinaryAnd) {
err = parser.nextToken(lexer.TokenKindUInt)
if err != nil { return }
member.bitWidth = parser.token.Value().(uint64)
err = parser.nextToken()
if err != nil { return }
}
err = parser.expect(lexer.TokenKindNewline)
if err != nil { return }
err = parser.nextToken()
if err != nil { return }
return
}


@@ -6,55 +6,35 @@ func TestType (test *testing.T) {
checkTree ("../tests/parser/type", false,
`:arf
---
type ro aBasic:Obj:
(
.ro that:Int
.ro this:Int
)
type ro bBitFields:Obj:
(
.ro that:Int & 1
.ro this:Int:<298> & 24
)
type ro cInit:Obj:
(
.ro that:String:<"hello world">
.ro this:Int:<23>
)
type ro dInitInherit:aBasic:
(
.that:<9384>
.this:<389>
)
type ro eInitAndDefine:aBasic:
(
.ro these:aBasic:
(
.ro born:Int:<4>
.ro in:Int
.ro the:Int:3:
<
9348
92384
92834
>
):
(
.this:<98>
)
):
(
.that:<9384>
.this:<389>
)
type ro aBasic:Obj
ro that:Int
ro this:Int
type ro bBitFields:Obj
ro that:Int & 1
ro this:Int 298 & 24
type ro cInit:Obj
ro that:String 'hello world'
ro this:Int 23
type ro dInitInherit:aBasic
ro that 9384
ro this 389
type ro eInitAndDefine:aBasic
ro this 389
ro that 9384
ro born:Int 4
ro in:Int
ro the:Int:3 (9348 92384 92834)
ro walls:String 'live in the walls, die in the walls.'
type ro fBasic:Int
type ro gBasicInit:Int:<6>
type ro gBasicInit:Int
6
type ro hIntArray:{Int ..}
type ro iIntArrayInit:Int:3:
<
type ro iIntArrayInit:Int:3
(
3298
923
92
>
)
type ro jAtEnd:Int
`, test)
}


@@ -0,0 +1,10 @@
:arf
---
data ro aBasicInt:Int 5
data ro bRune:Int 'A'
data ro cString:String 'A very large bird'
data ro dCharBuffer:U8:32 'A very large bird\000'


@@ -0,0 +1,24 @@
:arf
require '../typeSection'
---
enum ro aWeekday:Int
- sunday
- monday
- tuesday
- wednesday
- thursday
- friday
- saturday
type ro bColor:U32
enum ro cNamedColor:bColor
- red 0xFF0000
- green 0x00FF00
- blue 0x0000FF
enum ro dFromFarAway:typeSection.dInheritFromOther
- bird
- bread 4


@@ -0,0 +1,8 @@
:arf
---
type ro aCString:{U8}
func ro bArbitrary
---
'puts' [cast 'hellorld\000' aCString]


@@ -1,4 +1,26 @@
:arf
require './required'
---
type basicInt:Int:<5>
type ro aBasicInt:Int 5
type ro bOnBasicInt:aBasicInt
type ro cBasicObject:Obj
ro that:UInt
ro this:Int
type ro dInheritFromOther:required.aBasic
type ro eInheritObject:cBasicObject
ro that 5
type ro fInheritObjectFromOther:required.bBird
ro wing 2
ro beak:Int 238
type ro gPointer:{Int}
type ro hDynamicArray:{Int ..}
# TODO: test a type that has a member pointing to itself


@@ -0,0 +1,6 @@
:arf
---
type ro aBasic:Int
type ro bBird:Obj
rw wing:Int 2


@@ -1,3 +1,3 @@
:arf
--- rw -> -349820394 932748397 239485.37520 "hello world!\n" 'E' helloWorld:.,..()[]{}
--- rw -> -349820394 932748397 239485.37520 'hello world!\n' 'E' helloWorld:.,..()[]{}
+ - ++ -- * / @ ! % %= ~ ~= = == != < <= << <<= > >= >> >>= | |= || & &= && ^ ^=


@@ -1,2 +0,0 @@
:arf
'aaaaaaa'


@@ -1,2 +1,2 @@
:arf
"\g"
'\g'


@@ -1,4 +1,4 @@
:arf
"hello world!\a\b\f\n\r\t\v\'\"\\"
'\a' '\b' '\f' '\n' '\r' '\t' '\v' '\'' '\"' '\\'
"hello world \x40\u0040\U00000040!"
'hello world!\a\b\f\n\r\t\v\'\\'
'\a' '\b' '\f' '\n' '\r' '\t' '\v' '\'' '\\'
'hello world \x40\u0040\U00000040!'


@@ -1,9 +1,9 @@
:arf
---
data ro aInteger:Int:<3202>
data ro aInteger:Int 3202
data ro bMutInteger:Int:mut:<3202>
data ro bMutInteger:Int:mut 3202
data ro cIntegerPointer:{Int}
@@ -13,33 +13,30 @@ data ro eIntegerArray16:Int:16
data ro fIntegerArrayVariable:{Int ..}
data ro gIntegerArrayInitialized:Int:16:<
3948 293 293049 948 912
340 0 2304 0 4785 92
>
data ro gIntegerArrayInitialized:Int:16
(3948 293 293049 948 912
340 0 2304 0 4785 92)
data rw hIntegerPointerInit:{Int}:<[& integer]>
data rw hIntegerPointerInit:{Int} [& integer]
data rw iMutIntegerPointerInit:{Int}:mut:<[& integer]>
data rw iMutIntegerPointerInit:{Int}:mut
[& integer]
data ro jObject:Obj:(
.this:<324>
.that:<324>)
data ro jObject:Obj
(324
438)
data ro kNestedObject:Obj:(
.this:(
.bird0:<324>
.bird1:<"hello world">)
.ro newMember:Int:<9023>
.that:(
.bird2:<123.8439>
.bird3:<9328.21348239>))
# TODO: at some point, have this syntax for object literals. terminate members
# with newlines.
# data ro jObject:Bird (
# .this 324
# .that 438)
data ro lMutIntegerArray16:Int:16:mut
data ro mExternalData:Int:8
external
data ro nIntegerArrayInitialized:Int:16:mut:
<3948 293 293049 948 912
340 0 2304 0 4785 92>
data ro nIntegerArrayInitialized:Int:16:mut
(3948 293 293049 948 912
340 0 2304 0 4785 92)


@@ -11,26 +11,26 @@ enum ro Weekday:Int
- saturday
enum ro NamedColor:U32
- red: <0xFF0000>
- green: <0x00FF00>
- blue: <0x0000FF>
- red 0xFF0000
- green 0x00FF00
- blue 0x0000FF
enum ro AffrontToGod:Int:4
- bird0:
<28394 9328
398 9>
- bird1:
<23 932832
- bird0
(28394 9328
398 9)
- bird1
(23 932832
398
2349>
- bird2:
<1
2349)
- bird2
(1
2
3
4>
4)
enum ro ThisIsTerrible:Obj:(.rw x:Int .rw y:Int)
- up: (.x:< 0> .y:<-1>)
- down: (.x:< 0> .y:< 1>)
- left: (.x:<-1> .y:< 0>)
- right: (.x:< 1> .y:< 0>)
enum ro ThisIsTerrible:Vector
- up ( 0 -1)
- down ( 0 1)
- left (-1 0)
- right ( 1 0)


@@ -1,7 +1,7 @@
:arf
---
face ro ReadWriter:Face
face ro aReadWriter:Face
write
> data:{Byte ..}
< wrote:Int
@@ -11,5 +11,10 @@ face ro ReadWriter:Face
< read:Int
< err:Error
face ro Destroyer:Face
face ro bDestroyer:Face
destroy
face ro cFuncInterface:Func
> something:Int
< someOutput:Int
< otherOutput:String


@@ -5,11 +5,11 @@ require "io"
---
# this is a global variable
data pv helloText:String "Hello, world!"
data pv helloText:String 'Hello, world!'
# this is a struct definition
objt ro Greeter:Obj
rw text:String "Hi."
rw text:String 'Hi.'
# this is a function
func ro main


@@ -2,14 +2,14 @@
---
func ro aBasicExternal
> someInput:Int:mut
< someOutput:Int:<4>
< someOutput:Int 4
---
external
func ro bMethod
@ bird:{Bird}
> someInput:Int:mut
< someOutput:Int:<4>
< someOutput:Int 4
---
external
@@ -27,7 +27,7 @@ func ro cBasicPhrases
func ro dArgumentTypes
---
[bird tree butterfly.wing "hello world"
[bird tree butterfly.wing 'hello world'
grass:Int:mut:8]
func ro eMath
@@ -122,14 +122,22 @@ func ro gControlFlow
else
otherThing
func ro hSetPhrase
func ro hDataInit
---
let x:Int:<3>
= x:Int 3
# loc is a reference, similar to * in C
let y:{Int}:<[loc x]>
let z:Int:8:
<398 9 2309 983 -2387
478 555 123>
let bird:Bird:(
.that:(.whenYou:<99999>)
.this:<324>)
= y:{Int} [loc x]
= z:Int:8 (398 9 2309 983 -2387
478 555 123)
= bird:Bird (
(99999)
324)
func ro iDereference
> x:{Int}
> y:{Int ..}
> z:Int:4
---
= b:Int {x}
= c:Int {y 4}
= d:Int {z 3}


@@ -1,7 +1,7 @@
:arf
author "Sasha Koshka"
license "GPLv3"
require "./some/local/module"
require "/some/absolute/path/to/someModule"
require "someLibraryInstalledInStandardLocation"
author 'Sasha Koshka'
license 'GPLv3'
require './some/local/module'
require '/some/absolute/path/to/someModule'
require 'someLibraryInstalledInStandardLocation'
---


@@ -6,37 +6,33 @@ data ro aExternalData:Int
data ro bSingleValue:Int 342
data ro cNestedObject:Obj
-- this
-- bird0 324
-- bird1 "hello world"
-- that
-- bird2 123.8439
-- bird3 9328.21348239
data ro cNestedObject:Obj (
(324 'hello world')
(123.8439 9328.21348239)
)
data ro dUninitialized:Int:16:mut
data ro eIntegerArrayInitialized:Int:16:mut
3948 293 293049 948 912
340 0 2304 0 4785 92
(3948 293 293049 948 912
340 0 2304 0 4785 92)
func ro fComplexFunction
---
= x:Int 3
= y:{Int} [loc x]
= z:Int:8
= y:{Int
} [loc x
]
= z:Int:8 (
398 9 2309 983 -2387
478 555 123
= bird:Bird
-- that
-- whenYou 99999
-- this 324
478 555 123)
= bird:Bird ((99999) 324)
func ro gExternalFunction
> x:Int
< arr:Int 5
34908
(34908
39 3498
38 219
38 219)
---
external


@@ -1,35 +1,38 @@
:arf
---
type ro aBasic:Obj:(
.ro that:Int
.ro this:Int)
type ro aBasic:Obj
ro that:Int
ro this:Int
type ro bBitFields:Obj:(
.ro that:Int & 1
.ro this:Int:<298> & 24)
type ro bBitFields:Obj
ro that:Int & 1
ro this:Int 298 & 24
type ro cInit:Obj:(
.ro that:String:<"hello world">
.ro this:Int:<23>)
type ro cInit:Obj
ro that:String 'hello world'
ro this:Int 23
type ro dInitInherit:aBasic:(
.that:<9384>
.this:<389>)
# the semantic analyzer should let these sections restrict the permissions of
# inherited members, but it should not let the sections loosen the permissions.
type ro dInitInherit:aBasic
ro that 9384
ro this 389
type ro eInitAndDefine:aBasic:(
.this:<389>
.ro these:aBasic:(
.ro born:Int:<4>
.ro in:Int
.ro the:Int:3:<9348 92384 92834>
.this:<98>)
.that:<9384>)
type ro eInitAndDefine:aBasic
ro this 389
ro that 9384
ro born:Int 4
ro in:Int
ro the:Int:3 (9348 92384 92834)
ro walls:String 'live in the walls, die in the walls.'
type ro fBasic:Int
type ro gBasicInit:Int:<6>
type ro gBasicInit:Int 6
type ro hIntArray:{Int ..}
type ro iIntArrayInit:Int:3:
<3298 923 92>
type ro iIntArrayInit:Int:3
(3298 923 92)
type ro jAtEnd:Int

translator/translator.go Normal file

@@ -0,0 +1,11 @@
package translator
import "io"
import _ "git.tebibyte.media/arf/arf/analyzer" // imported for use once translation is implemented
// Translate takes in a path to a module and an io.Writer, and outputs the
// corresponding C through the writer. The C code will import nothing and
// function as a standalone translation unit.
func Translate (modulePath string, output io.Writer) (err error) {
// TODO
return
}


@@ -39,7 +39,7 @@ func (iterator Iterator[VALUE_TYPE]) Value () (value VALUE_TYPE) {
}
// Next advances the iterator by 1.
func (iterator Iterator[VALUE_TYPE]) Next () {
func (iterator *Iterator[VALUE_TYPE]) Next () {
iterator.index ++
}
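The receiver change above matters because a value receiver increments a copy of the iterator, so Next would have no visible effect on the caller's iterator. A self-contained sketch with a simplified Iterator, not the project's actual container type:

```go
package main

import "fmt"

// Iterator is a simplified stand-in for the container's iterator type.
type Iterator[VALUE_TYPE any] struct {
	values []VALUE_TYPE
	index  int
}

// Value returns the element at the current position.
func (iterator *Iterator[VALUE_TYPE]) Value() VALUE_TYPE {
	return iterator.values[iterator.index]
}

// Next advances the iterator by 1. A pointer receiver is required:
// with a value receiver the increment would apply to a copy and the
// iterator would appear stuck at index 0.
func (iterator *Iterator[VALUE_TYPE]) Next() {
	iterator.index++
}

func main() {
	iterator := Iterator[string]{values: []string{"a", "b", "c"}}
	iterator.Next()
	fmt.Println(iterator.Value()) // prints b
}
```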