I think it's more the patching thing that made "collect and replay inputs" less common.
Networked games have a "tickrate", just for the networking/state aspect. For example, Counter-Strike 2 has a 64Hz tickrate by default. They also typically have a fixed time interval for physics engines. Both of these should be completely independent of framerate, because that's jittery and unpredictable.
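The usual way to decouple simulation rate from framerate is the fixed-timestep accumulator. A minimal sketch, assuming a 64Hz tick as in CS2's default (function and variable names are mine, not any engine's):

```cpp
#include <cassert>

// Fixed-timestep accumulator: physics/network ticks run at a constant
// rate (here 64 Hz) no matter how fast or jittery frames arrive.
// Leftover time stays in the accumulator for the next frame.
int run_ticks(double frame_dt, double& accumulator,
              double tick_dt = 1.0 / 64.0) {
    accumulator += frame_dt;
    int ticks = 0;
    while (accumulator >= tick_dt) {
        accumulator -= tick_dt;
        ++ticks;  // advance the simulation by exactly tick_dt here
    }
    return ticks;
}
```

A slow 30fps frame simply runs two or three ticks to catch up; a fast 240fps frame often runs none and just banks the time.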
Indeed. As per this timing diagram, Denise accesses each 16-bit word of each bitplane sequentially. Every bitplane you turn off frees up more cycles for the blitter... or CPU!
Fun fact! The Amiga Workbench is 4 colour hires by default, because hires is impressively businessy... but 8 or 16 colour hires would lock out the CPU most of the time, as the chipset would have to dip into the 68000's even cycle RAM accesses and stall it. 4 colour hires lets the CPU (on a chipmem-only system) run at full speed!
While the 68000's registers are 32-bit, its data bus is 16-bit; the A1000, A2000, and A500 that defined the range had 16-bit-fetching chipsets; and they literally had 24-bit address buses. None of this says "32-bit". It can't be overlooked.
Many games crashed on the 32-bit-clean A3000, A1200, A600, and A4000 because programmers used the upper byte of addresses as free storage for their own flags or whatever. (Similar issues arose going from ARM2 to ARM3 in Acorns; even RISC OS itself can be categorized into '26-bit' and '32-bit clean' varieties, due to Acorn assuming addresses ignore the upper 6 bits, so they could store what they liked there.)
The competition before the Amiga's launch solidly called itself "8-bit". The next generation called itself "16-bit" to hype itself. Later machines touted their "32-bit"ness, and then came the Nintendo 64 and PSX on MIPS processors...
All the hedges you made ("don't look here, look there") can be reversed to emphasize the 16-bitness!
Does this say something about you? Did you come to the Amiga later in its life, e.g. 1991-1993, when 68020s/030s/040s were an option? Or were you there in 1985 when it debuted?
The Opteron had a 32 bit HyperTransport bus. Modern CPUs only implement 48 address lines. And yet we’d call all of those 64 bit systems. We wouldn’t call them 32 bit systems, and surely not 48 bit.
The 68k’s ISA is 32 bit through and through, however the underlying implementation looks. It did since I bought my A1000, marketed as a 32 bit system, in 1985.
I'm sure there must have been some, but most of Commodore's early Amiga ads didn't mention the number of bits at all, and from looking through old magazines it doesn't seem most vendors did either.
I remember the Amiga always being compared to other "16-bit" machines, like the Apple IIgs, Atari ST, and early Macs.
I also remember the 68000 being referred to as 16/32-bit. Still, from a programmer perspective, the 68000 looked like a 32-bit machine, similar to what Intel did with the 386DX and SX.
The Mozilla Corporation then picks and chooses what it finances within the Mozilla Foundation. Their financial statements don't break down how they spend on software development; they only list employee salaries, specific directors' salaries, and grants to outsiders... but it would seem Thunderbird doesn't get much if they're out begging.
- $66,396,000 from paid services (e.g. Pocket, VPN) and advertisers
- $15,782,000 from donations
And it spent:
- $290,448,000 on programmer salaries
- $163,516,000 on manager salaries
- $36,358,000 on servers, cloud, etc.
- $20,258,000 on consultants (e.g. branding consultants)
- $9,573,000 on travel
- $2,192,000 on grants and fellowships
So overall, it didn't spend that much on the stupid doomed side projects! It spent a lot more on flying managers and marketing consultants to nice soirees.
But the real question, not answered by this financial report, is how much programming labour was spent on Thunderbird, versus other Mozilla projects?
My assumption would be that it's very little, given that Thunderbird was separated out of the Mozilla Corporation to MZLA Technologies.
On the bright side, that actually makes me a bit keener about donating to it; donating to the Mozilla Corporation seems entirely pointless given donations make up ~2.5% of their income, and less than 10% of what they spend just on manager salaries, whereas giving it to Thunderbird might actually have a positive impact.
> MZLA TECHNOLOGIES CORPORATION share of total income: $10,760,074
So they don't break it down, but around $10 million went to the corporation that runs Thunderbird and other projects (versus $658 million to the one that runs the browser).
> what kind of tools are needed for making these animations
They're motion-captured and/or animated by hand in a 3D editor, e.g. Blender.
But much more likely is that you won't be making animations, you'll be buying them (or getting them for free). There are many places you can buy these animations, already rigged to a skeleton.
Some examples (I don't endorse them specifically):
> skeletal animation. how are you supposed to think about this
Think of it like giving direction to an actor. You give high-level instructions to the animation system, and it picks an animation based on rules you set up in advance about which animation to use in which situation. It manages the transition to the next animation; all of these are animations of the skeleton, which the character model deforms to match (including physics-based parts of the character like hair and cloth).
Generally speaking, you define animation cycles (e.g. walk cycle, run cycle), and then transition between two different animations that are in phase with each other, but it can be a lot more complicated in order to look more natural.
Unity has the Animator Controller, Unreal has Animation Blueprints (and more recently Motion Matching), and Godot has AnimationTree.
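Those controllers all boil down to a state machine over animation clips. A toy sketch of the idea, with invented names and none of the blending or layering a real engine does:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// A toy animation state machine: states are clip names, and gameplay
// events drive transitions via rules you set up ahead of time. Real
// controllers add blend times, layers, parameters, etc.
struct AnimStateMachine {
    std::string current = "idle";
    // (current state, event) -> next state
    std::map<std::pair<std::string, std::string>, std::string> rules;

    void add(const std::string& from, const std::string& event,
             const std::string& to) {
        rules[{from, event}] = to;
    }
    void fire(const std::string& event) {
        auto it = rules.find({current, event});
        if (it != rules.end()) current = it->second;  // unknown events ignored
    }
};
```

Wire up "idle" -> "walk" -> "run" transitions once, then gameplay code only ever fires events like "move" or "sprint" and never touches clips directly.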
> do we need intermediate animations
If you want to, yes, but you can also have the game engine interpolate between keyframes for you.
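The interpolation part, for a single scalar channel between two authored keys (rotations would really use quaternion slerp; the names here are mine):

```cpp
#include <cassert>

// Sample an animation channel at time t, given the two keyframes that
// bracket it: (t0, v0) and (t1, v1). The engine computes every
// in-between frame this way, so you only author the keys.
float sample(float t, float t0, float v0, float t1, float v1) {
    float a = (t - t0) / (t1 - t0);  // normalized 0..1 between the keys
    return v0 + a * (v1 - v0);       // plain lerp; use slerp for rotations
}
```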
You haven't even mentioned things like having the character's feet stand realistically on non-level ground. For that you would use inverse kinematics, but not too much of it, because it has a tendency to go wonky.
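Two-bone IK for a leg reduces to the law of cosines. A minimal sketch (the clamp is the guard against the "wonky" out-of-reach case):

```cpp
#include <cassert>
#include <cmath>

// Analytic two-bone IK: given thigh length l1, shin length l2, and the
// distance from hip to foot target, solve the interior knee angle via
// the law of cosines. Clamping keeps unreachable targets sane: too far
// gives a straight leg (pi), too close gives a folded one (0).
double knee_angle(double l1, double l2, double dist) {
    double c = (l1 * l1 + l2 * l2 - dist * dist) / (2.0 * l1 * l2);
    if (c > 1.0) c = 1.0;
    if (c < -1.0) c = -1.0;
    return std::acos(c);  // radians
}
```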
> are there LLM tools
Yes, but you'd be better off with animations someone has already created; they tend to look better. Many companies are now offering AI-based 3D character generators too.
> formats like obj, fbx, m3d, glb etc. is it the same data stored in these files in a slightly different way
They all have different purposes. You want glTF/GLB (the same format, in text vs. binary form) for most purposes.
- so let us say for purposes of learning, i wanted to make an fps or a third person shooter (3D) without using unreal, unity, godot or any popular engine out there
- what does the process look like roughly?
- i managed to get c++ running (programmer here with a decade of non gamedev experience) and also added raylib and looked into jolt physics
- got a 3d grid constructed, window created, character model added
- what would be my next bunch of steps?
- should i add animations for each of the player states like walk, run etc?
- should i program interactions like shoot, throw a grenade etc?
- or should I start working on enemy AI like pathfinding A* algorithm with state machine?
- trying to code cooperative mode here so i looked into c++ udp libraries like enet. I am assuming latency and game reconciliation algorithms would be step 1 if you want to build coop from the ground up? basically create a server.cpp and a client.cpp and make the game loop work without crashing in cooperative mode on day 0. then worry about adding any interaction at all
- truly trying to comprehend at a high level what day 0 to day 1000 of a game looks like
You'd be committing the classic fallacy of "I'll just work on these tools, then make the game", which, while a fun exercise, almost never results in a game being released.
Think about what your ultimate goal is:
- you want to make games: use an existing engine. don't bother with half of the features, focus on whether the game is fun or not. add polish (like character animation transitions) later. use stock assets to begin with.
- you want knowledge to work in games industry but not actually release a game yourself: learn all the bells and whistles of Unreal Engine
- you want to make things that are unlike regular games: develop your own code
- you don't ever intend to release a game, you just want to see how they're made: just read other people's code. Read the Quake engine source code and https://fabiensanglard.net/ as a companion site.
If you're talking about using raylib, that is also a game engine, just a simpler one. We can look in both directions; if this is an exercise exclusively for personal learning and development, why not also learn about what's done for you by that library and by the GPU, etc? Occlusion, rasterisation, depth buffering, perspective-correct texture mapping...
"the number one most important skill is how to keep a tangle of features from collapsing under the weight of its own complexity" https://prog21.dadgum.com/177.html
this is what game engines do - they abstract the essential complexity present in all games, and keep it from infecting the one-time object, your game.
If you want to learn about games, honestly, take a look at existing engines. Take a look at old engines like DOOM or Quake, or even http://cubeengine.com/ and http://sauerbraten.org/ (and their corresponding source code) -- they are very simple compared to modern FPS engines. The Cube engines render geometry using octrees rather than the traditional BSP or recursive portal approach.
> I am assuming latency and game reconciliation algorithms would be step 1
Yes, if you intend to make a networked game, write your netcode first, share state with client(s) over a network protocol, even if the network is 127.0.0.0/8
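If you go the ENet route, the library moves datagrams for you, but the payload format is yours. A minimal sketch of snapshot (de)serialization — struct and field names are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// A hypothetical per-player snapshot: which simulation tick it
// describes, plus position. This is the kind of struct you'd pack
// into a UDP payload and hand to a library like ENet.
struct PlayerState {
    uint32_t tick;
    float x, y, z;
};

std::vector<uint8_t> pack(const PlayerState& s) {
    std::vector<uint8_t> buf(sizeof(PlayerState));
    std::memcpy(buf.data(), &s, sizeof s);  // OK same-machine; real netcode
    return buf;                             // handles endianness/padding
}

PlayerState unpack(const std::vector<uint8_t>& buf) {
    PlayerState s;
    std::memcpy(&s, buf.data(), sizeof s);
    return s;
}
```

Tagging every snapshot with its tick is what later makes reconciliation possible: the client can compare the server's state at tick N with what it predicted for tick N.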
Gamers have opinions about netcode, because it affects how they have to think in order to play the game, so netcode becomes as much a creative endeavour as the level design, graphics, etc.
Every area of endeavour you've mentioned is a fractal of timesuck. They all have their basics and then their advancements, built up by thousands of people over decades.
If you are learning by doing, for god's sake, keep it simple. Make the simplest thing that works. If you're making an FPS, have static geometry and non-animated character models (a 2D sprite will do). Prioritise having the most basic thing working as your goal. Otherwise you will be off in the weeds for years and you'll probably give up having built nothing.
> what day 0 to day 1000 of a game looks like
Pick a baseline (whether that's a game engine, or raw language) and then spend the rest of the time making the game: designing gameplay, levels, movement, interactivity, playtesting, feedback, placeholder art, real art... it's about standing on the shoulders of giants, not re-inventing the wheel, and putting your mind and creativity into the new thing, which is your game
You are forgiven for not knowing about the University of Leiden's Escher and the Droste effect site from 2002, given it shut down in 2024, but they were the first to try filling in the centre of Print Gallery and make the association with the cocoa tins
This is "the bomber will always get through" mentality for the modern era. You will invent air defences. You will write fewer bugs. You will leave code that doesn't have bugs alone, so it gains no more bugs. You will build software that finds bugs as easily as you think "enemies" find bugs, and you'll run it before you release your code.
What's the saying? Given many eyes, all bugs are shallow? Well, here are some more eyes.
Here's Super Mario Bros's demo replay data: https://gist.github.com/1wErt3r/4048722#file-smbdis-asm-L108...
21 bytes of joypad input and 21 bytes of input timings