Integration of Microgrids and Nanogrids in Smart Grid
Lara Bannister, Aaron Potter, Zachary Wertz, Michael Forster, and Matthew Smith
Lara Bannister, Aaron Potter, Zachary Wertz, Michael Forster, Matthew Smith, ENT466: Electrical Design
Faculty Mentor(s): Professor Ilya Grinberg, Engineering Technology
A microgrid is defined as a group of distributed energy resources, including renewable energy resources and energy storage systems, together with loads that operate locally as a single controllable entity. The goal of microgrid systems is to have the utility feed one or more microgrids, which may be large buildings such as hospitals or campuses. These microgrids would have power generation and storage capabilities enabling them to enter “island mode.” Island mode refers to operation in which the microgrid is completely independent of the utility grid; this feature is useful for reducing the utility grid's load during peak hours and in the event of maintenance or a fault on the lines connecting the utility grid to the microgrid. Nanogrids may be connected to the microgrid and operate on a similar principle of self-sustaining during the scenarios described above. In a nanogrid, one customer with solar panels is interconnected with other houses in that nanogrid, enabling the home with solar panels to supply power to the other homes once the original home's demand is met. This microgrid/nanogrid integration system is a small-scale emulation of a real-world scenario, allowing for laboratory testing and data analysis. The end goal of the project is to replicate a practical system in which the utility grid, microgrid, and nanogrid work in conjunction with one another as efficiently as possible.
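The island-mode decision described above can be sketched as a simple rule: disconnect only when islanding is useful (a utility fault or peak-hour load relief) and the local generation plus storage can actually cover the load. This is a minimal illustrative sketch; the function name, units, and thresholds are assumptions, not the team's actual controller logic.

```python
# Hypothetical sketch of an island-mode decision for a microgrid controller.
# All names and kW figures below are illustrative assumptions.
def should_island(utility_fault: bool, peak_hours: bool,
                  local_generation_kw: float, storage_kw: float,
                  load_kw: float) -> bool:
    """Island only when disconnecting is useful AND local supply covers the load."""
    can_self_sustain = local_generation_kw + storage_kw >= load_kw
    wants_to_island = utility_fault or peak_hours
    return can_self_sustain and wants_to_island

# Example: a fault on the utility tie line, with sufficient local supply
print(should_island(utility_fault=True, peak_hours=False,
                    local_generation_kw=40.0, storage_kw=20.0,
                    load_kw=55.0))  # True
```

When local supply cannot meet the load, the same call returns False and the microgrid stays connected to the utility.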
Hand Gestures Sensor
Som Dhital, Jose Rodriguez, Prem Kafley, and Dylan Woodling
Som Dhital, Jose Rodriguez, Prem Kafley, Dylan Woodling, ENT466: Electrical Design II
Faculty Mentor(s): Professor Ilya Grinberg, Engineering Technology, Professor Ken Pokigo, Engineering Technology, Professor Darold Wobschall, Engineering Technology
This project utilizes two HC-SR04 ultrasonic sensors and an Arduino UNO microcontroller. The two ultrasonic sensors are connected to the Arduino UNO, and from there to the computer. Code is uploaded to the Arduino to interface with the sensors. The sensors detect a gesture made by a person's hand and trigger a reaction on the computer, such as adjusting volume controls or changing slides in Microsoft PowerPoint. Our research may potentially be developed to limit the amount of person-to-surface contact and reduce the spread of viruses. The project may also benefit the elderly, or persons with disabilities, who may experience difficulty adjusting volume controls, scrolling in a web browser, or playing/pausing a video. The ultrasonic sensors can only sense distance; therefore, a method has been developed to detect the different gestures in addition to the distance to the object.
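One way to infer a gesture from two distance-only sensors is to compare which sensor detects the hand first and whether the hand then reaches the other sensor. The sketch below illustrates that idea in Python; the threshold value and function name are assumptions for illustration, not the team's actual detection method.

```python
# Hedged sketch: classifying a swipe from two HC-SR04-style distance readings.
# NEAR_CM and the reading order are illustrative assumptions.
NEAR_CM = 30  # a hand closer than this counts as "detected"

def classify_gesture(left_cm_first, right_cm_first,
                     left_cm_later, right_cm_later):
    """Infer a swipe by which sensor sees the hand first, then second."""
    left_first = left_cm_first < NEAR_CM and right_cm_first >= NEAR_CM
    right_first = right_cm_first < NEAR_CM and left_cm_first >= NEAR_CM
    if left_first and right_cm_later < NEAR_CM:
        return "swipe_left_to_right"   # e.g. next PowerPoint slide
    if right_first and left_cm_later < NEAR_CM:
        return "swipe_right_to_left"   # e.g. previous slide
    return "none"

print(classify_gesture(10, 100, 100, 10))  # swipe_left_to_right
```

A hold-to-adjust-volume gesture could be layered on top by tracking whether one sensor's distance decreases steadily over successive readings.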
Industry 4.0/IoT Demo
Karl Dorcelian, Zachariah Mayo, Fakhri Alameri, Elton Mensah-Selby, and Ryan Borkowski
Karl Dorcelian, Zachariah Mayo, Fakhri Alameri, Elton Mensah-Selby, Ryan Borkowski, ENT466: Electrical Design II
Faculty Mentor(s): Professor Ilya Grinberg, Engineering Technology, Professor Mike Haake, Kaman Automation, Steve Klein, Kaman Automation
Industry 4.0 and IoT are new strategies that have changed and upgraded the way we interact with digital technology throughout the industrial and manufacturing world. The project aims to use an IoT/Industry 4.0 demonstration to launch a pilot study in easy, efficient data exchange, providing accurate information from potential manufacturing technologies, solar/wind technology, and/or other useful machinery. In this context, Industry 4.0 is also known as the fourth industrial revolution, which integrates industrial practices and traditional manufacturing with modern-day smart technology. The IoT (Internet of Things) is a network of physical objects integrating sensors, software, and other advanced technologies that allows data exchange with many other devices and systems over the Internet. The approach used in the project is to connect a Programmable Logic Controller (PLC) and a motor drive to Ethernet. The Ethernet connection acquires data from the PLC and sends the data to Blue Open Studio (BOS) and MGuard. Through BOS, reports and motor operation logs are created based on the data acquired from the PLC over Ethernet. The MGuard Cloud serves as a highly secure web-based service for instant remote access and transfer of information to Machine Advisor. The result of a successful build is accurate output data from a wind-powered generator, solar panels, and/or a hydro-powered generator. Another outcome of a successful project is the ability to monitor electric machine data anywhere in the world via a secure Phoenix connection device.
Buffalo Baja Bengals (Baja SAE)
David Figueroa
David Figueroa, ENT422: Machine Design II
Faculty Mentor(s): Professor Jikai Du, Engineering Technology
Baja SAE® (Society of Automotive Engineers), which began in 1976, sponsors annual competitions that simulate real-world engineering design projects and their related challenges. Engineering students are tasked with designing and building an off-road vehicle that will survive the severe punishment of rough terrain. Each team's goal is to design and build a single-seat, all-terrain sporting vehicle whose structure contains the driver. SUNY Buffalo State College has been competing in Baja SAE for over 20 years, finding new concepts to make each year's vehicle better. This year's competition has adopted an alternative format because of Covid-19, splitting it into two separate events: (1) the Baja SAE Knowledge Event (completely virtual), and (2) the Baja SAE Validation Event (virtual and in-person). Due to SUNY's Covid-19 policy, we are only taking part in the Knowledge Event, for which we have transformed our vehicle design from RWD to 4WD. Our vehicle's 10 HP engine powers our gearbox, which provides speed and torque conversions to the front and rear differentials to drive all four wheels with the support of a driveshaft. The driveshaft between the gearbox and front differential has a component known as a switch-out, similar to a transfer case but much lighter. This switch-out lets the driver switch from 2WD to 4WD manually.
Are Those Ants?
Karanveer Gill
Karanveer Gill, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
This project studies how to detect people who appear minuscule in images; small persons can be identified at heights as low as 20 pixels. The project draws inspiration from satellite imagery and aims to replicate how a satellite image can reveal a person: if a picture is taken from a satellite, the identification of small persons can be beneficial for investigations. The project is written in the Python language, with Jupyter Notebook as the Integrated Development Environment (IDE). Docker on Windows is utilized to access a COCO annotator, which is referenced from within the Python program through a JSON file. A dataset will be downloaded with help from this COCO annotator, and the data annotation is designed to detect tiny persons. The purpose of the project is to immediately detect persons in a picture who are not in a crowded area. Images will mainly be aerial shots where the people in the image look tiny in size, and an indication will show when the object, in this case the “tiny person,” is identified. The project will also attempt to zoom in on the photo, almost replicating a zoomed-in camera or satellite image. With the identification of tiny people, the detected object will be confirmed as human. An objective of this research is to mimic a satellite image or camera identifying potential suspects.
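COCO-style annotations store each object as a bounding box in `[x, y, width, height]` form inside a JSON structure, so "tiny persons" can be selected by filtering on box height. The sketch below shows that filtering step under the abstract's 20-pixel threshold; the sample data and function name are invented for illustration.

```python
# Illustrative sketch: selecting "tiny person" boxes from a COCO-style
# annotation dict. The 20-pixel height threshold follows the abstract;
# the sample annotations below are made up.
TINY_MAX_PX = 20

def tiny_person_ids(coco: dict, person_category_id: int = 1) -> list:
    """Return annotation ids whose person bounding box is at most TINY_MAX_PX tall."""
    return [a["id"] for a in coco["annotations"]
            if a["category_id"] == person_category_id
            and a["bbox"][3] <= TINY_MAX_PX]   # bbox = [x, y, width, height]

sample = {"annotations": [
    {"id": 1, "category_id": 1, "bbox": [10, 10, 8, 18]},   # tiny person
    {"id": 2, "category_id": 1, "bbox": [50, 40, 30, 90]},  # full-size person
]}
print(tiny_person_ids(sample))  # [1]
```

In the real project the `coco` dict would come from `json.load()` on the annotator's exported file rather than an inline sample.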
Person Re-Identification: Tracking the World
Mac Johnson
Mac Johnson, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
Person re-identification is the use of multiple videos or camera angles to track individuals over time. Someone walking down the street might appear in several camera views at once; the task is to combine the videos or snapshots to show it is the same person. One problem with this idea is that pictures can vary in many ways that affect accuracy. The plan is to use multiple training libraries to determine whether this improves identification. Person re-identification is a rising topic in the cybersecurity field and holds much potential for progress. PyCharm will be used as the IDE for this program because all the learning processes are stored in Python libraries. There are multiple options for the learning dataset, among them CUHK01, iLIDS-VID, and RPIField. The libraries will train the program with artificial intelligence to track a person and store images of them for future queries. Since these libraries are usually used individually, there should be an increase in accuracy when combining them. The expectation is to see a clear advantage when training methods are used together and also to identify the single most efficient library. The goal is to find a combination of these training techniques that makes the program's artificial intelligence more accurate and adaptable. The results of this research project will show the differences and benefits of using multiple image libraries.
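One common way to combine several trained models, and a plausible reading of the combination idea above, is to score each gallery identity by the average similarity reported across the models. This is a minimal sketch of that ensemble step with toy feature vectors standing in for real embeddings; nothing here is the project's actual pipeline.

```python
import math

# Sketch of the combination idea: average similarity scores from several
# feature extractors (stand-ins for models trained on CUHK01, iLIDS-VID,
# etc.). Vectors and identities below are toy data.
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def combined_match(query_feats, gallery):
    """query_feats: one feature vector per library for the query image.
    gallery: {person_id: [one feature vector per library]}.
    Score each identity by mean cosine similarity across libraries."""
    scores = {pid: sum(cosine(q, g) for q, g in zip(query_feats, feats)) / len(feats)
              for pid, feats in gallery.items()}
    return max(scores, key=scores.get)

gallery = {"person_A": [[1.0, 0.0], [1.0, 0.0]],
           "person_B": [[0.0, 1.0], [0.0, 1.0]]}
print(combined_match([[1.0, 0.0], [1.0, 0.0]], gallery))  # person_A
```

Comparing this combined score against each library's individual score is one way to measure the accuracy gain the abstract hypothesizes.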
Stovetop Overheat Sensor Project
Christopher Lonczak, Monte Perkins, Michael Adanri, and Kamali Henry
Christopher Lonczak, Monte Perkins, Michael Adanri, Kamali Henry, ENT466: Electrical Design II
Faculty Mentor(s): Professor Ilya Grinberg, Engineering Technology, James Heimburger, Northrop-Grumman, Professor Darold Wobschall, Engineering Technology and ESensors
The Stovetop Overheat Sensor is part of a larger home monitoring system for assisting the elderly. The objective of the project is to monitor the elderly in their everyday home activity. The elderly population is known to be more susceptible to a variety of significant medical conditions, including Alzheimer's disease, Huntington's disease, dementia, and depression. The project utilizes three separate sensors working together to monitor the overall status of a cooking area in use: a volatile organic compound (VOC) sensor, an air temperature sensor, and an infrared sensor. The VOC sensor detects volatile organic compounds that can result from burning; the infrared sensor detects heat radiated by the cooking surface, such as a pot or pan; and the air temperature sensor detects the temperature of the air in the vicinity of the stovetop. The project uses the PIC16F18446 microcontroller to control the operation of the three sensors. The microcontroller receives and processes the data retrieved from the sensors and checks it against a set of predetermined parameters to ensure the safe operation of the cooking area. If the sensor data reaches the predefined limits, indicating possible or immediate danger, the system sets off an alarm alerting the user. This project is a subsystem of a larger home monitoring system being developed by the ESensors company and is intended to function as one of its main threat-detection subsystems.
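The check-against-predetermined-parameters step can be expressed as a small threshold table, which is essentially what the PIC firmware would evaluate on each sensor poll. This Python sketch is only illustrative; the threshold values and sensor names are assumptions, not the project's calibrated limits.

```python
# Illustrative threshold check for the three-sensor scheme described above.
# The numeric limits are assumptions, NOT the project's calibrated values.
THRESHOLDS = {"voc_ppb": 500, "air_temp_c": 60, "surface_temp_c": 260}

def check_stovetop(voc_ppb, air_temp_c, surface_temp_c):
    """Compare each reading to its limit; return status plus exceeded sensors."""
    readings = {"voc_ppb": voc_ppb,
                "air_temp_c": air_temp_c,
                "surface_temp_c": surface_temp_c}
    exceeded = [name for name, value in readings.items()
                if value >= THRESHOLDS[name]]
    return ("ALARM", exceeded) if exceeded else ("OK", [])

print(check_stovetop(voc_ppb=600, air_temp_c=25, surface_temp_c=150))
```

On the microcontroller the same comparison would run in a loop, with the "ALARM" branch driving the audible alert.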
Text Based Adventure Game
Jovannie Lopez
Jovannie Lopez, CIS435: Python Programming
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
This research project will develop a text-based adventure game. The game will be created using Hypertext Markup Language (HTML) scripts combined with programming in the Python language. The design will include many paths, with pictures accompanying different events that occur during play. The primary goal is to combine knowledge of HTML with the Python programming language to put the game on a website on the Buffalo State server; keeping the HTML basic allows the focus to remain on the Python portions of the adventure game, making it expansive with many different routes. The purpose of this research project is to study working with Python and to see ways it can be implemented and augmented with other coding languages. Methods for developing this project include researching further connections between Python and HTML, and polishing the website's appearance with Cascading Style Sheets (CSS). The game follows the player's journey through a labyrinth. Players start out with no knowledge of why or how they were put into this predicament, but as the game proceeds, they meet other characters in the labyrinth and learn what is going on.
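The many-paths structure described above is commonly represented as a dictionary of rooms, where each room maps exit directions to other rooms. The sketch below shows that core data structure in Python; the room names and text are invented examples, not the project's actual game content.

```python
# Minimal sketch of a branching text-adventure map. Room names, text, and
# exits here are invented placeholders for the labyrinth described above.
ROOMS = {
    "entrance": {"text": "You wake in a labyrinth.", "north": "hall"},
    "hall": {"text": "A stranger offers a clue.",
             "south": "entrance", "east": "daylight"},
    "daylight": {"text": "You step out of the labyrinth."},
}

def move(current: str, direction: str) -> str:
    """Follow an exit if it exists; otherwise stay in the current room."""
    return ROOMS[current].get(direction, current)

room = move("entrance", "north")
print(room, "-", ROOMS[room]["text"])
```

Served through HTML, each room's text and picture would render as a page, with the exits rendered as links that call `move` on the server side.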
Face Recognition and Smile Detection
Devanshi Malaviya
Devanshi Malaviya, CIS435: Python Programming
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
Face recognition in real time is a popular subject of research and a rapidly growing challenge. Emotion recognition, which focuses on various parts of the face and on speech tones, is also an exciting field. In this project, my focus is on face recognition and smile detection. I am planning to build a camera-based, real-time face recognition system that can detect whether a person is smiling and display a message to that effect. The Python tools used in developing this project are OpenCV (including its Haar cascade classifiers and LBPH face recognizer), NumPy, and PIL. Face recognition is done in steps, starting with face detection, followed by identification and verification of the facial image using the LBPH algorithm. The LBPH algorithm works on a pixel matrix of a grayscale image to give a new binary value to each cell, producing in the end a new image that represents the facial characteristics better than the original picture. The Haar cascade algorithm used for face and smile detection relies on elementary combinations of dark and bright areas in edge, linear, and central features. In the future, I plan to expand this project and implement it in video-calling software applications, using automatically generated captions along with face and emotion recognition to generate a transcript or summary of a meeting after it ends.
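The per-cell binary value in LBPH is the Local Binary Pattern: each pixel's eight neighbors are thresholded against the center pixel and the resulting bits are read off as a byte. The pure-Python sketch below shows that single step on a tiny grayscale patch (OpenCV's LBPH recognizer then builds histograms of these values over a grid of regions).

```python
# Pure-Python sketch of the Local Binary Pattern step behind LBPH:
# each pixel's 8 neighbors are thresholded against it to form one byte.
def lbp_value(img, r, c):
    """LBP code for pixel (r, c), reading neighbors clockwise from top-left."""
    center = img[r][c]
    neighbors = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                 img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    bits = ["1" if n >= center else "0" for n in neighbors]
    return int("".join(bits), 2)

# A 3x3 grayscale patch; only the center pixel's code is computed here.
patch = [[ 90,  20,  30],
         [200, 100,  95],
         [150, 140, 120]]
print(lbp_value(patch, 1, 1))  # 15  (binary 00001111)
```

Applying this to every interior pixel yields the "new image" the abstract describes, whose patterns are more robust to lighting changes than raw intensities.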
Traffic Light Control with Reinforcement Learning
Devanshi Malaviya
Devanshi Malaviya, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
The rapid increase in automobiles over the past few years has led to traffic congestion all over the world, forcing drivers to sit idly in their cars wasting time and fuel. Current traffic light control policies are not optimized, which leads to people waiting in their cars for nonexistent traffic and longer travel times than necessary. In the United States, motorists spend an average of nearly 100 hours in traffic congestion per year. The current research project focuses on reinforcement learning to optimize traffic flow and reduce drivers' travel time. This can be done by building an environment in which every intersection has knowledge of the number of vehicles and their speed as they approach. Simulation of Urban Mobility (SUMO) is used to build a traffic simulator. Reinforcement learning works on state and action policies, which allow traffic lights to make optimized decisions based on their current state. The model balances exploration and exploitation to ensure it does not overfit and that every lane is given importance according to how busy it is. For every state, the agent receives a reward if it reduces travel time, and the goal of the model is to collect as many rewards as possible. The project will therefore conclude by attempting to obtain the most optimized simulation. The Python packages used in the project are Keras, TensorFlow, and OpenAI.
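The state/action/reward loop and the exploration-exploitation balance described above can be sketched with a tabular Q-learning update and an epsilon-greedy action choice. This is a toy illustration only: the state labels, actions, and hyperparameters are invented placeholders, not values from the SUMO-based project.

```python
import random

# Toy Q-learning sketch of the state/action/reward loop described above.
# States, actions, and hyperparameters are illustrative placeholders.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def choose_action(q, state, actions, rng=random.random, pick=random.choice):
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if rng() < EPSILON:
        return pick(actions)                      # exploration
    return max(actions, key=lambda a: q.get((state, a), 0.0))  # exploitation

def update(q, state, action, reward, next_state, actions):
    """Standard Q-learning update toward reward + discounted best next value."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One step: switching to a north-south green reduced waiting time (reward 1.0).
q = {}
actions = ["green_NS", "green_EW"]
update(q, "queue_NS_long", "green_NS", 1.0, "queue_NS_short", actions)
```

In the real system, the state would encode per-lane vehicle counts and speeds from SUMO, and the reward would be the measured reduction in travel or waiting time.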
Quick Browse
Pa Reh
Pa Reh, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
The goal of this research project is to create a Google Chrome extension called “Quick Browse,” which will make navigation easier for all Chrome users by letting them browse web pages in less time. Google Chrome has been phenomenal, and many people prefer Chrome over other web browsers because it is very user-friendly. However, the browser does not have all the best features available, so it is important to take advantage of the Google Chrome extension tools to create the necessary changes; this is why “Quick Browse” is needed. The problem is important to fix because many users waste countless hours clicking around webpages in the browser. “Quick Browse” will speed up the process of reaching a destination webpage, allowing a user to accomplish more work without spending so much time clicking around the browser. The technologies involved in this project are HTML, CSS, JavaScript, JSON, and Photoshop. Photoshop is used to create an image appropriate for the extension's icon. HTML and CSS are mainly used to display what is on the browser. JavaScript and JSON form the code that works behind the scenes of the extension. The result will save user time by providing access to the destination webpage through a simpler process that eliminates the need to type in the web address.
COSMOS: Computer On-Board Scientific Mobile Observatory System
Aseel Shaibi, Madison Skinner, Daniel Sakona, and Tek Powdyel
Aseel Shaibi, Madison Skinner, Daniel Sakona, Tek Powdyel, ENT466: Electrical Design II
Faculty Mentor(s): Professor Ilya Grinberg, Engineering Technology, Professor Jon Battison, Industrial Advisor, Professor Jonathan Rosten, Engineering Technology
The COSMOS (Computer On-board Scientific Mobile Observatory System) project is a small-scale robotic Mars rover built for the University Mars Rover Competition. This competition is the world's premier robotics competition for college students, challenging student teams to design and build the next generation of Mars rovers that will one day work alongside astronauts exploring the Red Planet. Our 2020-2021 team designed and built a rover platform prototype. NI Multisim 14.2 was utilized to simulate the speed controller and joystick subsystems as an initial step. Data recorded from instrument measurements includes percent duty cycle, rise time, and the voltage and current requirements for each component and for the entire robot system. Drawing on scholarly literature and technical documentation, the team designed the robot to maintain constant velocity and turn all six wheels in the forward, backward, left, and right directions with user-controlled joysticks, with the capability to handle severe temperatures. Based on the voltage/current requirements, the robot system is set to operate at 12 volts DC with a capacity of 2 ampere-hours. The subsystems designed by each team member (joystick controller, motor driver, power distribution, and parts installation) are combined into one system and tested.
Vehicle Dash Cameras with Artificial Intelligence
Matthew Stranz
Matthew Stranz, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
The purpose of this research project is to create a vehicle dash camera that assists the user, police, and insurance companies with easy license plate recognition and number retrieval in the event of an accident and/or hit-and-run. Presently, personal vehicle dash cameras are very limited and only record data, and since accidents are very chaotic, it can be difficult to extract the required information. To implement these capabilities, the vehicle license plate will be detected by YOLOv4, a real-time object recognition system and machine learning model. Following detection, the number within the license plate will be filtered through OpenCV real-time computer vision and printed with the Tesseract OCR optical character recognition engine. Currently, this software capability is found only in government, police, drone, and stationary security cameras. Supporting software for this research includes Google Colaboratory, Jupyter Notebook, Anaconda (Miniconda for Raspberry Pi), the Nvidia CUDA Toolkit 10.1, and Git. The hardware needed is an Nvidia GPU, a Raspberry Pi 4 Model B, a micro SD storage card, an Arducam day-and-night vision camera, a Raspberry Pi 7" touch screen display, a USB to USB Type-C cable with a 15 W car charger, and a housing unit that incorporates the hardware along with a mount for the windshield or mirror. The final goal is to have all license plate numbers enlarged and printed above the license plate, and all detections saved onto the storage device.
A Ruff Day for a Dog Salon: A Way to Collect Data
Alexander Wagner
Alexander Wagner, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
A problem that exists in every company is how to collect and store user data, then keep it safe behind a login authenticator. Using Django, Python, and SQL, organizations can construct a simple website that allows users to enter, store, and view the data they have collected. This research project is meant to show how this process may be applied to small businesses. The project's rationale is to serve the needs of small businesses with limited staff, whose customer information and other data can be lost very easily. Prior work addressed this problem, but the difficulty was that the application was hosted on a single computer shared by multiple employees. The solution is to launch a website with back-end data storage using Django so the employees can record information with their device of choice. This is done by using Django to set up an IP address that employees can connect to, a form created with Python, and a database that stores the required information. Tools are also provided to download the database in different formats, such as comma-separated values and a text document. The expected result is an application running on a website, secured with a login client, that provides a means of data collection.
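The comma-separated-values export mentioned above reduces to serializing the stored records row by row. The sketch below shows that step with Python's standard library; the field names are invented for illustration, and in the real project the records would come from Django's ORM rather than an inline list.

```python
import csv
import io

# Sketch of the CSV export step: dumping stored records as comma-separated
# values. Field names below are illustrative, not the project's schema.
def export_csv(records):
    """records: list of dicts, e.g. rows from a Django queryset's .values()."""
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

rows = [{"customer": "Rex", "service": "bath"},
        {"customer": "Fido", "service": "trim"}]
print(export_csv(rows))
```

A plain-text export would follow the same pattern with a different formatter, and in Django the returned string would be wrapped in an `HttpResponse` with a CSV content type.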
Deep Learning API with Python
Salah Zahran
Salah Zahran, CIS494: Research in Computer Information Systems
Faculty Mentor(s): Professor Sarbani Banerjee, Computer Information Systems
Deep learning is a subset of artificial intelligence and machine learning, a type of technology that can think intelligently, similar to humans. The concept involves deep processing and analysis of data, which leads computers to make the best possible decision based on patterns and implications found in the data. Deep learning takes this concept to the next level by creating a structure similar to the neural networks in the human brain and applying it to its models: artificial neural networks with multiple layers of nodes, each of which processes a small fraction of the input data. Deep learning applications are very commonly written in Python, so that language is used for this project. Other software used is Django, a back-end framework for Python that is widely used for creating APIs and back-end web applications. This project will create an API with Django and use it to run arbitrary deep learning models. An API (Application Programming Interface) provides users with an interface that serves as a middle ground between users and the back end, offering versatile ways of interacting with web applications and making it easy for users to work with back-end apps. The project consists of coding an API that accepts deep learning models as input, runs them, and outputs the models' results in a convenient and clear way to the user.
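The API-as-middle-ground idea can be sketched without any framework: a registry maps a model name in the request payload to a callable, runs it, and wraps the result for the caller. Everything below (the registry, the `echo` model, the response shape) is an invented illustration; in the real project this dispatch would live inside a Django view handling HTTP requests.

```python
# Framework-free sketch of the API's middle-ground role: the request names a
# model, a registry finds it, and the result is wrapped for the user.
# All names and the response shape are illustrative assumptions.
MODELS = {}

def register(name):
    """Decorator that registers a model callable under a request name."""
    def wrap(fn):
        MODELS[name] = fn
        return fn
    return wrap

@register("echo")
def echo_model(inputs):
    """Stand-in for a deep learning model's predict() call."""
    return {"predictions": inputs}

def handle_request(payload: dict) -> dict:
    """Dispatch a request payload to the named model, or report an error."""
    model = MODELS.get(payload.get("model"))
    if model is None:
        return {"status": 400, "error": "unknown model"}
    return {"status": 200, "result": model(payload.get("inputs", []))}

print(handle_request({"model": "echo", "inputs": [1, 2]}))
```

In Django, `handle_request` would become a view: the payload would be parsed from the request body with `json.loads`, and the returned dict would be sent back as a `JsonResponse`.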