So here's my problem: ever since I updated from Win7 to Win10, my uTorrent started to look blurry, like it's in a bad, low resolution. In both Win7 and Win10 I use DPI scaling (it's set to 125%), but only in Win10 does the app seem to have issues with it. I tried following the advice from this page and it kind of works: I tick the “Disable display scaling on high DPI settings” checkbox in the properties window, apply the changes, start uTorrent and it looks fine. Unfortunately, when I close the program, it somehow resets the properties and the display scaling checkbox becomes unticked. What's interesting is that with other programs the trick works flawlessly and the option doesn't reset. Is there any solution to my problem? PS. I'm using the newest version of uTorrent - 3.4.3 (build 40760).
Vuze (formerly Azureus) is a free BitTorrent client, which is used to transfer files via the BitTorrent protocol. The application is written in Java and uses the Azureus Engine. Vuze offers multiple torrent downloads, queuing/priority systems (on torrents and files), start/stop seeding options and instant access to numerous pieces of information about your torrents. In addition to BitTorrenting, Vuze allows you to view, publish and share original DVD and HD quality video content. You can browse content through channels and categories containing TV shows, music videos, movies, video games and others. If you publish your own original content via the Vuze platform, you are able to charge for it. Please be advised: you must have Java installed to be able to run Vuze.
Enable the embedded tracker

qBittorrent's embedded tracker is disabled by default to save resources. If you wish to use it, you need to enable it in the advanced preferences panel ('Tools - Options - Advanced - Enable embedded tracker'). By default the tracker listens on port 9000.

If you want your tracker to be accessible from the Internet (i.e. outside your LAN), you will probably have to configure your router to forward the tracker port (default: 9000) to your machine. Once you have configured your router, you can test that your configuration is working by:

- Running qBittorrent.
- Enabling the embedded tracker.
- Going to a port checking site and typing in your tracker port (e.g. 9000).
- Pressing the 'Check' button. You should get a positive result; if not, your router is not properly configured.

How to share files with my friends using qBittorrent?

- Enable the embedded tracker in advanced preferences.
- Run the 'Torrent creation tool' in qBittorrent.
- Select the local files you wish to share.
- Type the URL of your local tracker into the tracker list (use an IP checking site to find your public IP address).
- Check the 'Start seeding after creation' box.
- Press the 'Create and save' button.
- Save the torrent file wherever you like.

Send this newly created .torrent file to your friends and they will be able to download the files you are sharing using any BitTorrent client.

If you are using qBittorrent without X: stop qBittorrent and add the following lines to ~/.config/qBittorrent/qBittorrent.conf, under the Preferences section:

[Preferences]
Advanced\trackerEnabled=true
Advanced\trackerPort=9000
Hi folks, I am an experienced web developer who has recently moved into embedded with the idea of making some IoT things. It was quite overwhelming, fascinating, and I felt like I went 25 years back in time to when I used to hack in C and assembly. One thing which still confuses the hell out of me is the user interface. I understand how to access a display in its most basic way (I once wrote Pong in assembly), and I know some very basic, low-level libraries such as uGFX, but those only allow very basic stuff; UIs made this way would either look like 80s interfaces or require an enormous amount of work to make them good looking.
Is there anything in the low-end (Cortex-M level) embedded world which is kind of like Bootstrap for the web? Nice looking widgets/buttons, scrollbars etc. which I can apply to the screen? How about drawing charts? I saw some products such as TouchGFX and Embedded Wizard. Are those products the only options I have if I want to make a nice GUI in a reasonable timeframe?

If you want a really easy way in to embedded GUI development, Geoff Graham is currently in the late stages of beta testing the Micromite+ (firmware available on the Backshed forum). This is a PIC32MX470 loaded with a very sophisticated Basic interpreter that also allows calling compiled C routines where ultimate speed is required. The Micromite firmware includes support for a range of TFT displays including the SSD1963 and ILI9341 as well as support for their touch controllers and SD cards. I've attached a summary of the Basic commands for GUI development below.
In addition there are all the usual drawing primitives BOX, CIRCLE, etc. The combination of interpreted Basic with compiled C and extensive GUI support makes writing embedded applications very easy.
Minimum hardware is the processor chip, a crystal, a resistor and a few capacitors. Once the Micromite firmware is programmed onto the MX470 then Basic programming is via a standard TTL UART or there is also on-chip support for USB.
Don't be put off by Basic. This is an incredibly powerful development environment for embedded applications and it is all FREE!!!
There's a demo of some of this available online. The advanced graphics controls are: Frame GUI FRAME, caption$, StartX, StartY, Width, Height, Colour This will draw a frame, which is a box with round corners and a caption. A frame does not respond to touch but is useful when a group of controls needs to be visually brought together.
It can also be used to surround a group of radio buttons and MMBasic will arrange for the radio buttons surrounded by the frame to be exclusive – that is, when one radio button is selected any other button that was selected and within the frame will be automatically deselected. LED GUI LED, caption$, CenterX, CenterY, Radius, Colour This will draw an indicator light (it looks like a panel mounted LED). When its value is set to a non-zero number it will be illuminated and when it is set to zero it will be off (a dull version of its colour attribute).
The caption will be drawn to the right of the LED and will use the colours set by the COLOUR command. A LED does not respond to touch. Check Box GUI CHECKBOX, caption$, StartX, StartY, Size, Colour This will draw a check box which is a small box with a caption. Both the height and width are specified with the 'Size' parameter. When touched an X will be drawn inside the box to indicate that this option has been selected and the control's value will be set to 1.
When touched a second time the check mark will be removed and the control's value will be zero. The caption will be drawn to the right of the Check Box and will use the colours set by the COLOUR command. Push Button GUI BUTTON, caption$, StartX, StartY, Width, Height, FColour, BColour This will draw a momentary button which is a square switch with the caption on its face. When touched the visual image of the button will appear to be depressed and the control's value will be 1. When the touch is removed the value will revert to zero. Caption can be a single string with two captions separated by a character (eg, 'UP DOWN').
When the button is up the first string will be used and when pressed the second will be used Switch GUI SWITCH, caption$, StartX, StartY, Width, Height, FColour, BColour This will draw a latching switch with the caption on its face. When touched the visual image of the button will appear to be depressed and the control's value will be 1. When touched a second time the switch will be released and the value will revert to zero. Caption can be a single string with two captions separated by a character (eg, 'ON OFF'). When this is used the switch will appear to be a toggle switch with each half of the caption used to label each half of the toggle switch. Radio Button GUI RADIO, caption$, CenterX, CenterY, Radius, Colour This will draw a radio button with a caption.
When touched the centre of the button will be illuminated to indicate that this option has been selected and the control's value will be 1. When another radio button is selected the mark on this button will be removed and its value will be zero. Radio buttons are grouped together when surrounded by a frame and when one button in the group is selected all others in the group will be deselected. If a frame is not used all buttons on the screen will be grouped together.
The caption will be drawn to the right of the button and will use the colours set by the COLOUR command. Display Box GUI DISPLAYBOX, StartX, StartY, Width, Height, FColour, BColour This will draw a box with rounded corners. Any text can be displayed in the box by using the CtrlVal(r) = command. This is useful for displaying text, numbers and messages. This control does not respond to touch. Text Box GUI TEXTBOX, StartX, StartY, Width, Height, FColour, BColour This will draw a box with rounded corners.
When the box is touched a QWERTY keyboard will appear on the screen. Using this virtual keyboard any text can be entered into the box including upper/lower case letters, numbers and any other characters in the ASCII character set. The new text will replace any text previously in the box. The value of the control can be set to a string starting with two hash characters (##), in which case the string (without the leading two hash characters) will be displayed in the box with reduced brightness.
This can be used to give the user a hint as to what should be entered (called 'ghost text'). Reading the value of the control displaying ghost text will return an empty string. When the control is used normally the ghost text will vanish.
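The ghost-text rule described above can be modelled in C like this. This is only a sketch of the behaviour; the struct and function names are invented, not MMBasic internals.

```c
#include <string.h>

#define BOX_TEXT_MAX 64

typedef struct {
    char text[BOX_TEXT_MAX];
    int  is_ghost;   /* 1 while the box is showing the dimmed hint */
} TextBox;

/* Setting a value that starts with "##" stores the remainder as ghost text. */
void textbox_set(TextBox *tb, const char *value) {
    if (strncmp(value, "##", 2) == 0) {
        strncpy(tb->text, value + 2, BOX_TEXT_MAX - 1);
        tb->is_ghost = 1;
    } else {
        strncpy(tb->text, value, BOX_TEXT_MAX - 1);
        tb->is_ghost = 0;
    }
    tb->text[BOX_TEXT_MAX - 1] = '\0';
}

/* Reading a box that is showing ghost text returns an empty string. */
const char *textbox_get(const TextBox *tb) {
    return tb->is_ghost ? "" : tb->text;
}
```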
MMBasic will try to position the virtual keyboard on the screen so as to not obscure the text box that caused it to appear. A pen down interrupt will be generated when the keyboard is deployed and a key up interrupt will be generated when the Enter key is touched and the keyboard is hidden. Number Box GUI NUMBERBOX, StartX, StartY, Width, Height, FColour, BColour This will draw a box with rounded corners.
When the box is touched a numeric keypad will appear on the screen. Using this virtual keypad any number can be entered into the box including a floating point number in exponential format. The new number will replace the number previously in the box. Similar to the Text Box, the value of the control can be set to a literal string with two leading hash characters (eg, '##Hint'), in which case the string (without the leading two characters) will be displayed in the box with reduced brightness. Reading this will return zero, and when the control is used normally the ghost text will vanish.
MMBasic will try to position the virtual keypad on the screen so as to not obscure the number box that caused it to appear. A pen down interrupt will be generated when the keypad is deployed and a key up interrupt will be generated when the Enter key is touched and the keypad is hidden. Also, when the Enter key is touched the entered number will be evaluated as a number and the NUMBERBOX control redrawn to display this number. Spin Box GUI SPINBOX, StartX, StartY, Width, Height, FColour, BColour, Step, Minimum, Maximum This will draw a box with up/down icons on either end. When these icons are touched the number in the box will be incremented or decremented by the 'StepValue', holding down the touch will repeat the step at a fast rate.
'Minimum' and 'Maximum' set a limit on the value that can be entered. 'StepValue', 'Minimum' and 'Maximum' are optional and if not specified 'StepValue' will be 1 and there will be no limit on the number entered. A pen down interrupt will be generated every time up/down is touched or when automatic repeat occurs. Caption GUI CAPTION, text$, StartX, StartY, Justify, FColour, BColour This will draw a text string on the screen. 'Justify' is one or two letters where the first letter is the horizontal justification around X and can be L, C or R for LEFT, CENTER, RIGHT and the second letter is the vertical placement around Y and can be T, M or B for TOP, MIDDLE, BOTTOM. The default justification is left/top.
This command is similar to the basic drawing command TEXT, the difference being that MMBasic will automatically dim this control if a keyboard or number pad is displayed. If the colours are not specified this control will use the colours set by the COLOUR command.

Interacting with Controls
Using the following commands and functions, the characteristics of the on-screen controls can be changed and their values retrieved.

= CTRLVAL(#ref) This is a function that will return the current value of a control. For controls like check boxes or switches it will be the number one (true), indicating that the control has been selected by the user, or zero (false) if not. For controls that hold a number (eg, a SPINBOX) the value will be the number (normally a floating point number). For controls that hold a string (eg, a TEXTBOX) the value will be a string. For example: PRINT 'The number in the spin box is: ' CTRLVAL(#10)

CTRLVAL(#ref) = This command will set the value of a control. For off/on controls like check boxes it will override any touch input and can be used to depress/release switches, tick/untick check boxes, etc. A value of zero is off or unchecked and non-zero will turn the control on. For a LED it will cause the LED to be illuminated or turned off. It can also be used to set the initial value of spin boxes, text boxes, etc. For example: CTRLVAL(#10) = 12.4

GUI FCOLOUR colour, 1, 2, 3, etc This will change the foreground colour of the specified controls to 'colour'. This is especially handy for a LED which can change colour.

GUI BCOLOUR colour, 1, 2, 3, etc This will change the background colour of the specified controls to 'colour'.

= TOUCH(REF) This is a function that will return the reference number of the control currently being touched. If no control is currently being touched it will return zero.

= TOUCH(LASTREF) This is a function that will return the reference number of the control that was last touched.

GUI DISABLE 1, 2, 3, etc This will disable the controls in the list. Disabled controls do not respond to touch and will be displayed dimmed. The keyword ALL can be used as the argument and that will disable all controls. For example: GUI DISABLE ALL
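Internally, a TOUCH(REF)-style lookup can be as simple as a bounding-box scan that skips disabled controls. This is a plain-C illustration of the idea, not MMBasic's actual implementation; all names are invented.

```c
typedef struct {
    int ref;        /* control reference number (non-zero) */
    int x, y, w, h; /* bounding box on the screen */
    int enabled;    /* disabled/hidden controls don't respond to touch */
} Control;

/* Return the reference of the control under (tx, ty), or 0 if no
   enabled control is being touched. */
int touch_ref(const Control *ctrls, int n, int tx, int ty) {
    for (int i = 0; i < n; i++) {
        const Control *c = &ctrls[i];
        if (c->enabled &&
            tx >= c->x && tx < c->x + c->w &&
            ty >= c->y && ty < c->y + c->h)
            return c->ref;
    }
    return 0;
}
```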
GUI HIDE 1, 2, 3, etc This will hide the controls in the list. Hidden controls will not respond to touch and will be replaced on the screen with the current background colour. The keyword ALL can be used as the argument.

GUI RESTORE 1, 2, 3, etc This will undo the effects of GUI DISABLE or GUI HIDE and restore the controls in the list to full visibility and normal operation. The keyword ALL can be used as the argument for all controls.

GUI REDRAW 1, 2, 3, etc Will redraw the controls on the screen. It is useful if the screen image has been corrupted. The keyword ALL can be used as the argument for all controls.

GUI DELETE 1, 2, 3, etc This will delete the controls in the list. This includes removing the image of the control from the screen using the current background colour and freeing the memory used by the control. The keyword ALL can be used as the argument and that will cause all controls to be deleted.

GUI DEFAULT HIDDEN / GUI DEFAULT SHOW This will set the default state of newly created controls to hidden (ie, they will not be displayed on the screen until GUI RESTORE is used). This is useful when creating controls that will later be made visible depending on the program logic. The SHOW setting can be used to restore the original behaviour.

When I was wondering about the same thing a few years ago, I eventually concluded that there is no high level GUI lib that is suitable for me.
I was using an ARM926 chip and a small 160x128 LCD, but no dedicated graphics chip. The reasons I couldn't find an existing GUI lib are: - There is no standard look for embedded devices. Each has its own look and feel, which is often specified by your corporate design guidelines.
Interaction capabilities vary widely (capacitive touch screen? Maybe just a resistive touch screen where you can't swipe? Soft buttons? Physical controls?), as does screen resolution.
To get good performance you can't abstract too much; you need to keep the hardware capabilities in mind. One simple example is the bit format for pixels in the frame buffer. It should be the same as what the display uses in the end (and there is no common standard among displays); otherwise the per-pixel format conversion is going to slow you down a lot, because often you don't have a graphics card that does it for you - you'd need to do it on the CPU. Note: if you go with a more capable platform, such as a GHz-class ARM chip with embedded graphics, things are different. At that point you're probably using an OS and 3rd party drivers, and there's much less diversity in the hardware.
Look at i.MX6 based boards for example: they're all quite similar from a software point of view. I assume here that we're discussing something a bit smaller, without a standard graphics controller. Given that, the attempts at writing generic embedded GUI libs I've seen haven't impressed me. Their abstraction layers break frequently because the underlying hardware lacks some capability that the abstraction layer requires.
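To make the pixel-format point above concrete: converting 24-bit RGB to the 16-bit 5-6-5 layout that many small TFTs expect is exactly the kind of per-pixel CPU work that kills performance when the framebuffer format doesn't match the panel.

```c
#include <stdint.h>

/* Pack 8-bit-per-channel RGB into RGB565. Doing this for every pixel
   of every frame on a small CPU is the overhead to avoid: keep the
   framebuffer in the panel's native format instead. */
static inline uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
}
```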
Therefore, I would approach the problem like this: - pick a set of helper libraries that are hardware independent, and that you can easily port to a new platform. This would include: drawing geometric shapes into a framebuffer, and something like libpng to load images, also into the framebuffer.
Usually you have to add a little adapter for your specific pixel format, but it is a small addition. Write the framebuffer-hardware interface yourself. If you have the memory, push as much of the UI creation out of your code and into images as you can.
If you're memory limited, or that workflow doesn't work well for you, there are some compromises you can make, e.g. drawing a button based on a 9-segment image. At this point, you can draw many types of GUI and you haven't invested that much time. A few hours maybe, a bit longer if you do it for the first time. That leaves capturing and acting on user input. I haven't really found a fast way to implement that yet, because the input mechanisms are so different from project to project.
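The 9-segment-image compromise mentioned above boils down to a coordinate mapping: the four corners of a small source bitmap are copied 1:1 and the middle bands are stretched. A sketch of that mapping for one axis (the function name and parameters are illustrative):

```c
/* Map a destination coordinate back to a source coordinate along one
   axis of a 9-segment image: the two border bands are copied as-is,
   the middle band is stretched. Run it for x and y to draw a button
   of any size from one small bitmap. */
int nine_patch_map(int dst, int dst_size, int src_size, int border) {
    if (dst < border)                  /* leading border: copy */
        return dst;
    if (dst >= dst_size - border)      /* trailing border: copy */
        return src_size - (dst_size - dst);
    /* middle band: stretch proportionally */
    return border + (dst - border) * (src_size - 2 * border)
                                   / (dst_size - 2 * border);
}
```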
It's always a bit of work. I considered writing a portable generic abstraction layer that takes hardware input and spews out something like mouse/touch/key events, but I quickly got the impression that with the hardware-specific adaption I still would have to do, such an abstraction layer wouldn't save me much time, but it would increase code complexity.

I also like the post above about emWin. Keil has pro middleware with good graphics handling as well. I think the days of adding another screen to things are coming to an end. A lot of new popular IoT products aren't using screens (Amazon's microphone thing, GoPros, Withings scales, Fitbits and lots of other stuff). If you're starting today and looking to definitely include a screen, I'd tell you to take a step back and see if a user's pocket screen wouldn't be a better fit.
You can do a lot more with a phone/tablet connection than you can with an on-board screen.

GUI is more than writing graphical elements. A GUI needs an event queue which will hold and forward the events that result from user actions (button presses, screen tapping etc.) and the device's state changes (charger connected/disconnected, timer timeouts, alarm start, alarm stop etc.). A state machine (preferably hierarchical) will handle the events.
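One common shape for such a hierarchy in C, sketched with invented names: each element keeps a pointer to its parent, and an event is offered to the focused element first, then bubbled upward until someone consumes it.

```c
#include <stddef.h>

typedef enum { EV_TAP, EV_BACK } Event;

typedef struct Widget Widget;
struct Widget {
    Widget *parent;                         /* NULL at the root */
    int (*handle)(Widget *self, Event ev);  /* returns 1 if consumed */
};

/* Offer the event to the focused element; if it declines, bubble up
   through the parents until someone consumes it. */
int dispatch(Widget *focused, Event ev) {
    for (Widget *w = focused; w != NULL; w = w->parent)
        if (w->handle && w->handle(w, ev))
            return 1;
    return 0;   /* fell off the top unhandled */
}

/* Demo handlers: a button that only cares about taps, and a screen
   that handles everything. */
int button_handle(Widget *self, Event ev) { (void)self; return ev == EV_TAP; }
int screen_handle(Widget *self, Event ev) { (void)self; (void)ev; return 1; }
```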
Specifically, a hierarchical state machine provides a nice way to propagate the events automagically to the parent handler if the focused GUI element doesn't handle the event. For more information, look up hierarchical state machines.

Creating a GUI for an embedded device can be a breeze or a nightmare, depending on the framework you are using. If you want a good one, consider rolling your own. You need to be a skilled developer to efficiently program and maintain a GUI framework. It is also more expensive than buying emWin.
Unless you outsource it to some Asian country. There is also the new (Chinese) Nextion, which offloads a lot from the application MCU.

I built a small product with an M3 and emWin, without external memory, which leaves you with only the basic draw functions.
No fancy window managers because of the little RAM and the slow update of the serial LCD. On the one hand it uses little power, but on the other it's a hell of a job to program a black/white vector graphics parameter editor thing from basically bare-metal LCD routines.

Wow, that's some input! Thanks everyone. I really like Geoff Graham's UI, very clean and clear, will check it out, although I am a bit wary of Basic; not that I doubt its functionality, but I wish my code to be reasonably portable between MCUs. Nothing speaks against using the Basic as a driver for an HMI display.
What I understand so far is: there are a few OSS and commercial options as far as I can see, but regardless of which way one takes, making UIs that go beyond simple buttons, menus and input fields seems to be much, much harder on embedded. So if one already makes connected devices, it would be best to handle complex UIs via an external application. I would not go as far as eliminating the screen entirely, but leave it purely for status display as well as basic setup (network, for example).

One more thing I wish to leave in this thread for people totally new to embedded: flashy and complex UIs require raw MCU power and a fast connection to the display (lots of IO pins); both are precious resources one might not have on the 'main' MCU. So displays are often paired with dedicated MCUs to handle the UI; this pairing is called HMI modules, intelligent displays, etc. Such displays have their own firmware, written by hand or created with UI design tools, and those modules use serial protocols to communicate events with the main MCU. So the main MCU does not need to track touch events or send the actual image of a button to the display. Sometimes the MCU in the display is more powerful than the actual 'main' MCU. This might sound obvious but it was not really clear to me at first.

Is there a list of known libraries and solutions for UI on embedded? Here is what I found:

Basic graphics - vendor-independent libraries that allow drawing of primitive shapes, text and pixels on the screen. There are many of those; some famous examples are:
- uGFX
- u8glib
- NuttX GFX subsystem
- others such as the Adafruit libraries

Widgets and window managers:
- NuttX widgets and WM, based on the NuttX GFX subsystem, open source and free
- emWin, commercial, prices start at 5k EUR, free to link against for NXP MCUs

UI design tools - allow visual design of UIs, often have embedded scripting languages to control UI behavior, and compile UIs into byte code.

Probably not. The software for displaying streaming video may be a problem. It's probably pretty simple in Linux on a Raspberry PI since you have access to all the tools and libraries. As to GUI, first you have to be able to draw stuff, then you have to be able to receive events against them, then you need to dispatch the event to a chain of handlers. In other words, a full windowing package.
I remember using the ZINC library about 25 years ago to create a very nice looking 'fill-in-the-form' application. There was a ton of code behind my simple GUI but I didn't have to write it! The thing is, the entire library started out as a) draw a dot, color the dot b) draw a line, c) make the line width adjustable, d) draw a rectangle, circle and ellipse, e) fill them, f) text them and so on up the hierarchy. First you crawl, then you walk, then you run. First you draw a single pixel dot.
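That bottom-up hierarchy can be sketched in a few lines of C: a toy framebuffer, a pixel, lines built from pixels, and a rectangle outline built from lines. The framebuffer layout here is a deliberately tiny illustration, not a real display driver.

```c
#include <string.h>

#define W 16
#define H 16

static unsigned char fb[H][W];   /* toy framebuffer, 1 byte per pixel */

/* First you draw a single pixel dot... */
void set_pixel(int x, int y, unsigned char c) {
    if (x >= 0 && x < W && y >= 0 && y < H)
        fb[y][x] = c;
}

/* ...then lines built from dots... */
void hline(int x0, int x1, int y, unsigned char c) {
    for (int x = x0; x <= x1; x++) set_pixel(x, y, c);
}

void vline(int x, int y0, int y1, unsigned char c) {
    for (int y = y0; y <= y1; y++) set_pixel(x, y, c);
}

/* ...then a rectangle outline built from lines. */
void rect(int x, int y, int w, int h, unsigned char c) {
    hline(x, x + w - 1, y, c);
    hline(x, x + w - 1, y + h - 1, c);
    vline(x, y, y + h - 1, c);
    vline(x + w - 1, y, y + h - 1, c);
}
```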
Today, you use Raspberry PI and one of the graphics packages such as Qt or Tkinter, maybe using Python. I was playing with this a few months ago on a PC and it seemed pretty straightforward.
Microsoft's IoT stuff for Win 10 is pretty neat. They replace the OS on a Raspberry PI with their own abbreviated Win 10 and then use it as a remote device. I haven't gotten very far with this but I played with it earlier this year. The question re: where is the GUI is very interesting. There is a huge advantage to leveraging a cell phone for the GUI and leaving the details to the IoT device.
Google makes the Android SDK available so creating apps isn't all that hard. Apple makes something available as well but you have to register as a developer.

No, you can do the GUI wherever you want, with several thousand lines of code.
Draw a dot, draw a line, draw a rectangle, etc. Hopefully you can find a library. That's just drawing shapes on a screen. If you don't get too fancy, like 3D controls, the code may be a reasonable amount. Now you have to write code to detect an event and somehow dispatch it to the proper place in FreeRTOS. That wouldn't be hard if functions were waiting on events but generally, they aren't. Events are asynchronous.
And your code has to determine which form got the event, unless you only have one form. You have to cause some kind of event interrupt to propagate through a bunch of non-idle functions looking for a handler.
Add another couple of thousand lines of code. Forms overlaying forms?
Where do you store the non-displaying part of the screen, the part that's overwritten? Drop down menus? Buttons, knobs and dials?
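The event plumbing described above often starts with a ring-buffer queue: the input ISR pushes events, the GUI task pops and dispatches them. A minimal sketch (single producer, single consumer assumed; the names and event layout are invented, not a FreeRTOS API):

```c
#include <stdint.h>

#define QLEN 8u   /* power of two keeps the wrap-around cheap */

typedef struct { uint8_t type; int16_t x, y; } GuiEvent;

static GuiEvent q[QLEN];
static volatile unsigned q_head, q_tail;  /* head: next write, tail: next read */

/* Producer side (e.g. a touch ISR). Returns 0 when the queue is full. */
int event_push(GuiEvent ev) {
    if (q_head - q_tail == QLEN) return 0;
    q[q_head % QLEN] = ev;
    q_head++;
    return 1;
}

/* Consumer side (the GUI task). Returns 0 when nothing is pending. */
int event_pop(GuiEvent *out) {
    if (q_head == q_tail) return 0;
    *out = q[q_tail % QLEN];
    q_tail++;
    return 1;
}
```

In a real RTOS build the consumer would block on a semaphore or task notification instead of polling, but the queue itself looks much the same.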
I would try to do the UI on a much higher level system/board. Yes, I would use a Raspberry PI for the UI functionality and I would use it to display video - another huge undertaking to code from scratch. Then I would use a WiFi dongle to connect to the end device. That leaves the STM32F doing what it can do best - toggling pins.
Of course, the Raspberry PI can do the same thing but it tends to be more limited than a dedicated microcontroller particularly in terms of pins and real-time response. FreeRTOS will be much more responsive simply because there isn't as much code involved with swapping tasks. I haven't seen too many projects that use the feature but the Raspberry PI comes with a camera interface. Maybe I would take a hybrid approach and use some kind of microcontroller, maybe the STM32F to handle the mechanics of the project, whatever they may be. Switch inputs, motor outputs, whatever.
There are lots of pins for talking to the outside world! Then I would stack a Raspberry PI on top to handle everything else. The PI could handle the UI, networking, camera, high level decision making and the two processors could communicate over SPI. Invent a simple language for communicating. Obviously, this isn't the cheapest possible solution. A Blackfin would provide that.
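The "simple language" between the two processors could be as small as a framed message with a length byte and a checksum. This frame format is purely illustrative, not any established protocol:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame: [0xA5][len][payload...][checksum]
   where checksum = XOR of len and all payload bytes. */
size_t frame_encode(const uint8_t *payload, uint8_t len, uint8_t *out) {
    uint8_t sum = len;
    out[0] = 0xA5;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        sum ^= payload[i];
    }
    out[2 + len] = sum;
    return (size_t)len + 3;
}

/* Returns the payload length, or -1 if the frame is malformed. */
int frame_decode(const uint8_t *buf, size_t n, uint8_t *payload) {
    if (n < 3 || buf[0] != 0xA5) return -1;
    uint8_t len = buf[1];
    if (n < (size_t)len + 3) return -1;
    uint8_t sum = len;
    for (uint8_t i = 0; i < len; i++) {
        payload[i] = buf[2 + i];
        sum ^= payload[i];
    }
    return (sum == buf[2 + len]) ? len : -1;
}
```

The checksum matters on a board-to-board SPI link: a glitched byte is rejected instead of being acted on.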
But it balances the strengths of the platforms and the amount of original code that needs to be written from scratch.

Hi, I have built a couple of products using Segger emWin and the open source u8g. These projects can become quite complex quickly; a couple of the larger issues I had: 1. Say you want to add bitmaps and so on: you need to add external flash to the microcontroller and deal with executing code from external flash. Fine, but it does take a little while to get the code production ready. Then with the larger bitmaps wanted by the marketing department you get a comment that the whole screen loads rather slowly.
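A quick back-of-envelope calculation shows why full-screen updates over a serial link feel slow:

```c
#include <stdint.h>

/* Rough upper bound on full-screen updates per second over a serial
   link. Ignores command and setup overhead, so real rates are lower. */
uint32_t max_fps(uint32_t clock_hz, uint32_t w, uint32_t h, uint32_t bpp) {
    return clock_hz / (w * h * bpp);
}
```

At 320x240 and 16 bits per pixel a frame is about 1.2 Mbit, so a 10 MHz SPI clock tops out around 8 full-screen updates per second before any command overhead.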
Even at QVGA resolution, an SPI connection is not fast. So you then connect the display via parallel and want to use DMA. Then you find the exact display is no longer available (they have very short production runs in China), so for an equivalent display you need to spend time changing the driver code for the new glass.
Grrr. Actually now I have a complex front end to make, and I came to the conclusion that Win 10 IoT is a very good option for the following reasons:

1. The UI is built using world-class tools; Visual Studio 2015 has all of the features you could want to build a UI. Much better than any embedded IDE.
2. The UI is built using XAML and supports asynchronous actions and events.
3. Any of the code can be compiled to also be accessed with a PC app - so if you have to make a PC app with common logic you can use exactly the same code, and therefore development and unit testing etc. are only done once.
4. The XAML controls are completely customizable. Often with GUI toolkits all the widgets have a fixed look.
5. Win 10 IoT runs on a couple of different boards/chips, so there is some spread of sources.
6. Tools and GUI are free. Back in the day we paid a ton of money for embedded GUI.
7. XAML GUI is a very web-oriented way of thinking.

Incidentally, I have tried the 4D Systems displays. Although they aren't too bad, the tools to build the GUI are very painful and crash-prone, and the debug cycle was slow.