IT Questions and Answers :)

Wednesday, November 20, 2019

Which company was NOT part of the creation of the Ethernet standard?

  • Intel
  • Xerox
  • IBM
  • Digital Equipment Corp 

EXPLANATION

Ethernet /ˈiːθərnɛt/ is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN).[1] It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility and been refined to support higher bit rates and longer link distances. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.
The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second (Mbit/s)[2] to the latest 400 gigabits per second (Gbit/s). The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet.
Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded; most often, higher-layer protocols trigger retransmission of lost frames. As per the OSI model, Ethernet provides services up to and including the data link layer.[3] The 48-bit MAC address was adopted by other IEEE 802 networking standards, including IEEE 802.11 Wi-Fi, as well as by FDDI, and EtherType values are also used in Subnetwork Access Protocol (SNAP) headers.
Ethernet is widely used in homes and industry. The Internet Protocol is commonly carried over Ethernet and so it is considered one of the key technologies that make up the Internet.
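As a rough illustration of the frame layout described above, here is a minimal Python sketch that unpacks the destination MAC, source MAC, and EtherType from an Ethernet II header; it ignores the trailing frame check sequence and any VLAN tag.
```python
import struct

def parse_ethernet_header(frame: bytes) -> dict:
    """Unpack the 14-byte Ethernet II header: destination MAC, source MAC, EtherType."""
    if len(frame) < 14:
        raise ValueError("frame too short for an Ethernet header")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda b: ":".join(f"{octet:02x}" for octet in b)
    return {"dst": mac(dst), "src": mac(src), "ethertype": hex(ethertype), "payload": frame[14:]}

# Example: a broadcast frame carrying an IPv4 payload (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"...payload..."
print(parse_ethernet_header(frame))
```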

DEC, Intel, and Xerox, through their collaboration to create the DIX standard, were the champions of Ethernet; IBM was not involved and instead backed the competing Token Ring technology. DEC is the company that made Ethernet commercially successful. Initially, Ethernet-based DECnet and LAT protocols interconnected VAXes with DECserver terminal servers. Starting with the Unibus to Ethernet adapter, multiple generations of Ethernet hardware from DEC were the de facto standard. The CI "computer interconnect" adapter was the industry's first network interface controller to use separate transmit and receive "rings".

 

Which of the following statements is true?

  • SaaS is used by software developers
  • Applications cannot be developed in the cloud
  • PaaS is used by end users
  • PaaS is used to create web services 

EXPLANATION

Of the statements listed, only "PaaS is used to create web services" is true: Platform as a Service gives developers an environment for building and deploying applications, including web services, whereas SaaS is consumed by end users, and applications certainly can be developed in the cloud.
There are dozens of possible ways to develop software. A wide variety of programming languages, each with its own strengths and weaknesses, along with frameworks, libraries, and other development tools, lets you create applications of different types: for desktops, for mobile devices, web applications that work via a browser, and so on. Software development methodologies and frameworks such as Scrum or Kanban help you adapt the development process to various types of projects, reduce costs, eliminate risks, and use the available resources efficiently. But beyond the development itself, there is another important issue that requires close attention: the way the software is licensed and delivered.
When it comes to choosing which framework, technology, or programming language should be used during development, in most cases the development company makes the final decision and the customer relies on the developer's experience and skills. But the chosen software delivery model affects both the customer and the developer, because different models have different pros and cons and require different development strategies.
In this article, we'll talk about one of the existing software licensing and delivery models, Software as a Service (SaaS for short), and the key benefits it offers.

When provisioning new virtual servers, which of the following must you do?

  • Specify an SLA
  • Specify how much RAM the virtual server will use
  • Specify the maximum server log file size
  • Specify cloud backup options for the server 

EXPLANATION

When provisioning a new virtual server, you must specify the user credentials for the server and how much RAM the virtual machine will use.
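As a rough sketch of what that looks like in practice, the request below captures those two required pieces of information; `create_server` and the field names stand in for whatever call and schema your cloud provider's SDK actually exposes, and are purely hypothetical.
```python
def provision_virtual_server(create_server, name: str, ram_mb: int,
                             admin_user: str, admin_password: str):
    """Provision a VM, supplying the mandatory credentials and RAM size.

    create_server is a placeholder for the provider-specific API call; the
    field names here are illustrative, not any real provider's schema.
    """
    return create_server(
        name=name,
        memory_mb=ram_mb,                       # RAM allocation must be specified
        credentials={"username": admin_user,    # as must the admin credentials
                     "password": admin_password},
    )
```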

Your public cloud is configured so that additional cloud storage is allocated to a virtual server when the disk space used on that server reaches 80 percent of disk capacity. Which term describes this configuration?

  • Automation
  • Self-service
  • Disk latency
  • Elasticity 

EXPLANATION

 

Automatically allocating more storage to a virtual server when it reaches 80 percent of disk capacity is an example of elasticity: the cloud provisions (and releases) resources on its own so that capacity tracks demand.
Hybrid cloud is gaining traction as organizations seek to realize the flexibility and scale of a joint public and on-premises model of IT provisioning while also changing the way their compute and storage infrastructure is funded, transferring costs from a capital expense (capex) to an operating expense (opex). The proportion of organizations with a hybrid cloud strategy grew to 58 percent in 2019, up from 51 percent the prior year, according to RightScale's State of the Cloud 2019 report.
Moving your infrastructure to the cloud, however, won’t necessarily guarantee the promised benefits, especially for compute-intensive datacenters. Indeed, there is a danger that the cloud part of a hybrid infrastructure could actually let you down – that performance gains and cost-savings are not realized. That can happen if you don’t optimize your distributed workloads – an area where policy-based automation can really help.
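To tie this back to the question: expanding storage at an 80 percent disk-usage threshold is exactly the kind of rule such automation enforces. A minimal sketch, assuming hypothetical get_disk_usage and allocate_storage helpers supplied by your provider's SDK or tooling, might look like this:
```python
DISK_THRESHOLD = 0.80     # expand once 80 percent of capacity is in use
EXPANSION_GB = 50         # illustrative increment per expansion

def check_and_expand(server, get_disk_usage, allocate_storage):
    """Allocate extra cloud storage when used space crosses the threshold.

    get_disk_usage(server) -> (used_gb, capacity_gb) and allocate_storage(server, gb)
    are hypothetical callables standing in for your provider's SDK or tooling.
    """
    used_gb, capacity_gb = get_disk_usage(server)
    if used_gb / capacity_gb >= DISK_THRESHOLD:
        allocate_storage(server, EXPANSION_GB)   # elastic growth kicks in
        return True
    return False
```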

Why Hybrid Cloud?

Hybrid cloud solves one of the biggest problems in high performance computing: a lack of capacity. If you are relying purely on your own computing infrastructure then you face a trade-off between workload capacity and computing resources. Take batch processing jobs, for example; these often entail a complex mixture of workloads with different sizes, required completion times, and other characteristics.
Job schedulers seek to fit as many workloads as possible into the computing infrastructure, but job volumes and sizes aren’t always regular. Too many of them at once create demand spikes that may exceed resources. What happens when there are more workloads with specific machine requirements than there are machines, and those jobs have hard deadlines or high priority? Buying more computers to satisfy rare but critical spikes in demand isn’t the answer. That equipment could languish with lower utilization during workload volume valleys.

The Cloud Bursting Challenge

This is where the concept of hybrid cloud comes in because it lets you extend your existing, on-premise resources. The drivers for this are, typically, the need to ensure additional capacity for peak workloads, to provision more specialized resources than those usually required, or to spin up new resources for special projects. Unlike cloud-only, hybrid cloud uses a common orchestration system and lets administrators move data and applications between the local and remote infrastructures. By moving workloads dynamically to the cloud when local resources are insufficient, companies can build elasticity into their computing environments while avoiding extra capital outlay.
This provides several advantages: users can complete jobs more quickly, and also gain access to resources they might not have locally, such as GPUs, fast block and object storage, and parallel file systems.
While this is all great stuff, the practice of actually implementing bursting can be complex. For example: how can you ensure that you’re sending the right jobs to a cloud environment? With so many computing jobs of different sizes and types, making the best use of your local infrastructure and remote cloud resource is like playing a multi-dimensional game of Tetris.
Get it wrong, and you'll end up paying more for your hybrid cloud than you had planned. Moor Insights & Strategy highlights several examples of how costs can run out of control in hybrid cloud environments. These include budgeting for ideal capacity without allowing for uncertainty, and forecasting higher use of the infrastructure than you actually deliver, thereby paying for unused capacity. You might also miss smaller costs beyond compute and storage, such as data transfers, load balancing, and application services. Another common cause of cost overruns is failing to de-provision cloud resources once you've finished with them. Hybrid cloud customers can also fall into the trap of using higher-cost platform services, such as proprietary public cloud data storage, instead of infrastructure services running open source software on compute and storage services. It can all add up to an unnecessary dent in your budget.
These issues should be top of mind if cost management is – as it should be – a compelling reason for using the cloud in a hybrid setting. RightScale's 2019 State of the Cloud report found that 64 percent of cloud customers prioritized optimizing their cloud resources for cost savings in 2019, making this the top initiative for the third year in a row, up from 58 percent the year before. The figure is highest among intermediate cloud users (70 percent) and advanced users (76 percent), confirming that the more sophisticated your use of cloud resources, the more complexity you face.

Automate For Efficiency

Automation promises to help manage cloud usage and do away with some of these potential problems. An orchestration system that manages workloads can automatically play that game of workload Tetris for you, and – if done correctly – ensure optimal results. Such a system involves more than a job scheduler, however. These have been around for decades and tend to focus on resource efficiency and throughput of a single, static compute cluster. That means they can be inflexible and reactive, managing batch scheduling according to workload or project priorities or other parameters. Schedulers fail to take into account that cloud resources can be allocated and sized dynamically.
More modern cloud management systems serve multiple computing platforms and have been extended to seamlessly integrate on-prem with cloud. The most mature of these can tie job allocation to application performance and service levels. These systems can ensure that a user or department only gets the level of service that they agreed to, sending resource hogs further down in the queue.
Importantly, these tools can help you manage your costs. They do this by tagging resources when they’re provisioned so that admins know what machine instances are in play and what they are being used for. They can alert operators to such things as instances in the hybrid cloud that have gone unused beyond a certain threshold.
Such tools can also link to a dashboard that will let you peek into your level of cloud usage and see just how much your virtualized resources are costing. Ideally, admins should be able to drill down into usage and expenditure data on a per-project basis.
Rather than simply reporting back on what’s happened, this category of tools can be used to enforce policies to control what you’re spending on a per-project – or even a per-user basis – providing alerts when usage is nearing a pre-set budget threshold.
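As a rough sketch of that last idea, a per-project budget check could look like the following; the project names, spend figures, and alert hook are placeholders rather than any particular tool's API.
```python
def enforce_budgets(project_spend, budgets, alert, warn_ratio=0.9):
    """Alert when a project's month-to-date spend nears or exceeds its budget.

    project_spend and budgets map project names to amounts; alert(message) is
    whatever notification hook (email, chat, ticket) your tooling provides.
    """
    for project, spent in project_spend.items():
        budget = budgets.get(project)
        if budget is None:
            continue                      # no budget defined for this project
        if spent >= budget:
            alert(f"{project}: over budget ({spent:.2f} of {budget:.2f})")
        elif spent >= warn_ratio * budget:
            alert(f"{project}: at {spent / budget:.0%} of budget")

# Example with made-up figures:
enforce_budgets({"render-farm": 940.0}, {"render-farm": 1000.0}, print)
```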

Writing The Rules

Of course, the best rules don't come out of a box; somebody has to write them in the form of policies that the cloud manager can use to determine a course of action. Such policies should take several factors into account: for example, which applications are allowed to burst to the public cloud and which are not, which software licenses permit running only on local servers, and which data cannot move beyond the datacenter for compliance or security reasons.
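A policy table of that sort can be sketched very simply; the application names and rules below are invented purely for illustration.
```python
# Invented policy table: which workloads may burst to the public cloud, and why not.
BURST_POLICY = {
    "render":   {"may_burst": True},
    "eda-sim":  {"may_burst": False, "reason": "license tied to local servers"},
    "genomics": {"may_burst": False, "reason": "regulated data must stay in the datacenter"},
}

def may_burst(app_name):
    """Allow bursting only when the policy explicitly permits it."""
    return bool(BURST_POLICY.get(app_name, {}).get("may_burst", False))

print(may_burst("render"), may_burst("genomics"), may_burst("unknown-app"))   # True False False
```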
When writing the rules, you also need to be aware of the technical factors that can stymie a job's journey to the cloud, so that you can work around them. A workload may rely on output from prerequisite jobs that are running locally, meaning it must wait to execute, or a short-running workload may not be worth the time it takes to provision the cloud-based resources it needs. Spinning up a machine instance to handle a job may take two minutes, while uploading the data it needs may take five minutes. If the job ahead of it in the local queue will finish in 30 seconds, it makes sense to wait rather than send the workload to the cloud.
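That wait-or-burst trade-off reduces to a comparison of lead times, as in this small sketch:
```python
def should_burst(local_wait_s, provision_s, upload_s):
    """Burst only if waiting for local capacity costs more time than reaching the cloud."""
    cloud_lead_time_s = provision_s + upload_s
    return local_wait_s > cloud_lead_time_s

# The example above: 2 min to spin up + 5 min to upload data vs. a 30 s local wait.
print(should_burst(local_wait_s=30, provision_s=120, upload_s=300))   # False -> wait locally
```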
Another factor that a policy could draw on is the direction and pacing of the workload. A policy could decide how many remote cloud server instances to spin up or spin down, based on whether the number of scheduled jobs is growing or shrinking, and how quickly.
Ultimately, this kind of thinking can help you deliver a "reaper policy" that deletes server resources at the end of a cloud-based job. The reaper policy keeps the use of unnecessary cloud resources to a minimum, but if the gap between the queued workloads and the available local resources continues to grow, the policy may keep some machine instances available for new jobs so that it doesn't waste time starting new ones.
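A minimal sketch of such a reaper, which also keeps a couple of instances warm while the backlog is growing, might look like this; the instance objects and terminate call are placeholders for whatever your orchestration tooling provides.
```python
def reap_idle_instances(instances, queue_depth, prev_queue_depth, terminate, keep_warm=2):
    """Terminate idle cloud instances, but keep a few warm while the backlog grows.

    instances: iterable of objects with an .is_idle attribute; terminate(instance)
    is a placeholder for the orchestration tooling's teardown call.
    """
    backlog_growing = queue_depth > prev_queue_depth
    spare = keep_warm if backlog_growing else 0
    idle = [i for i in instances if i.is_idle]
    for instance in idle[spare:]:        # leave `spare` idle instances running
        terminate(instance)
```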

Rightsizing The Infrastructure

There is yet another dimension to consider: rightsizing. The cloud manager must spin up the appropriate machine instance in the hybrid cloud to enable the scheduler to dispatch the job. Sizing accuracy is important in the cloud where you pay for every processor core and gigabyte of RAM used.
Admins should therefore use their own custom (and internally approved) instances rather than relying purely on the cloud service provider’s own default configurations. You should take care to match the core count and memory in those virtual servers to the size of the job so that you aren’t paying extra for computing and memory resources that aren’t needed.
Although cloud computing jobs often specify machine instance requirements themselves, savvy admins won’t take that at face value. Instead, they will use runtime monitoring and historical analytics to determine whether regular jobs use the resources they ask for. Admins finding a significant difference may specify server instances based on historical requirements, rather than stated ones.
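A rough sketch of that rightsizing logic, using monitoring samples to cap the requested size (the 20 percent headroom factor is an arbitrary illustrative choice):
```python
def rightsize(requested_cores, requested_ram_gb, usage_history, headroom=1.2):
    """Cap a job's requested size using observed peak usage plus some headroom.

    usage_history is a list of (cores_used, ram_gb_used) samples from runtime
    monitoring; with no history, fall back to the stated requirements.
    """
    if not usage_history:
        return requested_cores, requested_ram_gb
    peak_cores = max(c for c, _ in usage_history)
    peak_ram = max(r for _, r in usage_history)
    cores = min(requested_cores, max(1, round(peak_cores * headroom)))
    ram_gb = min(requested_ram_gb, max(1, round(peak_ram * headroom)))
    return cores, ram_gb

# A job that asks for 16 cores / 64 GB but historically peaks near 4 cores / 10 GB:
print(rightsize(16, 64, [(3, 8.0), (4, 10.0), (2, 6.0)]))   # -> (5, 12)
```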
The ideal situation here is a NoOps model, in which policies automate operations to the point where there is little administrative intervention in an increasingly optimized system. The factors contributing to policy execution are many, varied, and multi-layered while the policies themselves can be as complex as an admin wants to make them. The more complex the policies, the more important it is to test and simulate them to ensure that they operate according to expectations.
Policy-driven cloud automation is the key to the successful use of hybrid cloud because it provides a way to tune infrastructure and avoid burning cash on unnecessary cloud resources, such as machine instances and cloud storage. It’s the next step towards a mature use of cloud away from simple, discrete jobs to an environment that’s an integral part of your data processing toolbox.

Which of the following terms best describes a cloud service's ability to rapidly increase user accounts?

  • Volatility
  • Viability
  • Elasticity
  • Synchronicity 

EXPLANATION

In cloud computing, elasticity is defined as "the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible." A service that can rapidly add (and later remove) user accounts as demand changes is exhibiting exactly this property.
